
Balancing Fixed Costs and Agile Projects

Managing fixed-cost projects while trying to stay flexible with Agile methods can be a challenge. Fixed-cost projects come with clear budgets, scope, and deadlines, but they can lack the adaptability Agile is known for. At Elemental Concept, we’ve found ways to combine these two approaches to deliver great results while staying adaptable.

Fixed-cost projects are attractive to clients because they provide certainty, but that can get tricky when new challenges or requirements pop up during development. Agile is great for handling these changes, but it’s tough to stay flexible when the budget, scope, and timeline are locked in. So, how do we make it work?

We set expectations early. From the start, we talk to clients about the need to prioritise the most important features—the minimum viable product (MVP). By focusing on the core functionality first, we leave room to adjust other features later, without affecting the overall budget or deadlines. We also make it clear that changes down the line might require us to drop or tweak other features to keep everything on track.

We use a prioritised backlog to stay flexible. By getting the high-value features done first, we ensure that the most important parts of the project are delivered on time. The lower-priority features can be revisited and adjusted as needed based on feedback. This helps us stick to the fixed-cost limits while still being adaptable.
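As a purely illustrative sketch (the feature names, values and effort figures below are invented, not taken from a real project), a prioritised backlog can be treated as a list of features ranked by business value, drawn down against the fixed budget until it runs out:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    value: int        # relative business value agreed with the client
    effort_days: int  # estimated delivery effort

def plan_fixed_cost_scope(backlog, budget_days):
    """Pick the highest-value features that fit inside the fixed budget."""
    selected, remaining = [], budget_days
    for feature in sorted(backlog, key=lambda f: f.value, reverse=True):
        if feature.effort_days <= remaining:
            selected.append(feature)
            remaining -= feature.effort_days
    return selected, remaining

backlog = [
    Feature("User registration", value=10, effort_days=8),
    Feature("Payment flow", value=9, effort_days=13),
    Feature("Push notifications", value=4, effort_days=5),
    Feature("Dark mode", value=2, effort_days=3),
]
scope, spare = plan_fixed_cost_scope(backlog, budget_days=25)
print([f.name for f in scope], f"{spare} days left for change requests")
```

When the client raises a change request, the same exercise is simply re-run with the new item in the list, making the trade-off against lower-priority features visible to everyone.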

Regular feedback loops are key to keeping things flexible. We break the project into small, iterative sprints, delivering working software to the client regularly. This gives us plenty of chances to gather feedback and make changes along the way. If the client's needs change, or they realise certain features aren't as valuable as they thought, we can adjust the backlog without disrupting the project.

Handling change requests is one of the trickier parts of balancing fixed-cost and Agile. Change is inevitable, but scope creep can quickly derail a project. We encourage clients to raise changes early and remind them that adding new features may mean dropping or delaying others.

Lastly, building trust through transparency is essential. We keep clients in the loop with regular updates, share working software at every step, and document everything clearly. This builds trust and keeps clients comfortable with the Agile process, even within the fixed-cost framework.

To conclude, while fixed-cost projects have their limits, there are plenty of ways to stay flexible. By setting clear expectations, focusing on key features, staying open to feedback, and managing changes carefully, we've been able to successfully blend the reliability of fixed-cost delivery with the flexibility of Agile.

 


Challenges of Misinformation on Social Media: A Path to Enhanced Accountability

Over the past few years, social media companies have faced mounting pressure to regulate content accessible or promoted through their platforms while addressing the pervasive issue of misinformation. The prevailing sentiment is clear: “Social media companies should do more.” In response, these companies have implemented various measures to mitigate these challenges. However, from an observer’s perspective, the effectiveness of these measures in policing information remains questionable.

The authenticity and accuracy of information shared on social media platforms, and indeed by many media outlets, is often dubious and frequently labelled as “fake news.” Alarmingly, social media has become the primary source of information for a significant portion of the population. As a result, many individuals are increasingly looking to governments and regulators for guidelines and legislation that impose specific standards on social media companies.

From the standpoint of these companies, effectively monitoring the content disseminated through their platforms presents a complex and elusive challenge. The proliferation of advanced tools for generating content complicates this task. Although a variety of algorithms can be developed and deployed to identify and remove content that fails to meet certain criteria, relying solely on algorithmic techniques to discern fake content is a short-term solution at best. The rapid advancements in artificial intelligence (AI) will likely continue to enable the creation of increasingly sophisticated content that evades detection by automated systems. Social media companies are acutely aware of the challenges associated with monitoring content on their platforms. Consequently, they appear to be gravitating towards a decentralized approach, placing the responsibility of verifying the authenticity of information in the hands of individual users.

To better understand the rationale behind this reliance on users, consider the dynamics of a village. In such a setting, individuals are often well-acquainted with one another, and stories spread rapidly. An essential aspect of storytelling within the village is that each person who shares a story risks their reputation and how they are perceived by the community. This creates a tangible sense of accountability concerning the decision to share information.

In combating online fraud, many mainstream banks in the UK have adopted a similar philosophy, encouraging customers to take some responsibility for verifying the identity of those with whom they transact. Customers are frequently prompted to reconsider the recipient of their payments, leading many to take extra time to confirm the legitimacy (to the best of their knowledge) of the transaction. The stakes are high, as the risk of losing hard-earned money compels individuals to act judiciously. This dynamic fosters a strong implicit assumption that participants in the ecosystem will act responsibly.

However, this sense of accountability is often weaker in the realm of social media. Given the overwhelming volume of information, individuals may struggle to scrutinize all content they encounter, yet they may feel compelled to share, repost or like it. Information on social media is typically disseminated through various mediums, such as posts, shares, reposts, and likes. These actions can be likened to retelling a story within a community, as they make content accessible to the user’s network. Unlike in smaller communities, however, there is minimal individual risk associated with spreading incorrect information. Although a potential collective cost exists, the average user is unlikely to factor this into their decision-making process. Similarly, the lack of individual rewards for sharing accurate information further complicates the issue.

As such, it can be argued that the decentralized approach of shifting the responsibility of verifying content authenticity to users is impractical, if not unrealistic, within the current social media landscape.

Social media platforms have the potential to enhance accountability by emulating real-life ecosystems. A straightforward approach might involve penalizing users for contributing to the spread of fake news. By incentivizing users to share, like, or repost only content they believe to be true (to the best of their knowledge), platforms can instill a sense of accountability and responsibility at the individual level, thereby curbing the spread of misinformation. However, significant challenges arise in designing and implementing mechanisms to effectively deter users from sharing untrustworthy information. While it may be impossible to prevent the posting of misleading content entirely, rethinking the “share,” “like,” and “repost” mechanisms for each piece of content could provide a powerful tool for controlling content quality while still allowing users to determine what they wish to disseminate.

The proposed approach to enhance accountability and verification of information on social media can be conceptualized as a relaxed blockchain mechanism by adopting key principles of decentralization and transparency without the full complexity of traditional blockchain systems. In this framework, users collectively participate in verifying the authenticity of content while creating a transparent record of interactions. Instead of relying solely on algorithmic detection or centralized authority, this approach empowers individuals to contribute to the validation process, akin to how nodes in a blockchain network validate transactions. By recording each user’s actions, such as sharing and liking content, a decentralized ledger emerges that tracks the provenance of information. This structure promotes a sense of accountability among users, as they recognize that their contributions impact the overall integrity of the information ecosystem.
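A minimal sketch of what such a “relaxed ledger” of interactions might look like is shown below; the structure, field names and hashing choice are illustrative assumptions rather than a specification:

```python
import hashlib, json, time

class InteractionLedger:
    """Append-only record of shares/likes that tracks the provenance of content."""
    def __init__(self):
        self.records = []

    def record(self, user_id, action, content_id):
        previous_hash = self.records[-1]["hash"] if self.records else "genesis"
        entry = {
            "user": user_id,
            "action": action,          # e.g. "share", "like", "repost"
            "content": content_id,
            "time": time.time(),
            "previous": previous_hash,
        }
        # Chain each entry to the previous one so the history is tamper-evident.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(entry)
        return entry

    def provenance(self, content_id):
        """Return every recorded interaction with a given piece of content."""
        return [r for r in self.records if r["content"] == content_id]

ledger = InteractionLedger()
ledger.record("alice", "share", "article-42")
ledger.record("bob", "repost", "article-42")
print(len(ledger.provenance("article-42")))  # 2
```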

Moreover, while traditional blockchains often employ consensus mechanisms to validate transactions, the proposed solution can implement a more flexible form of consensus based on user interactions and reputations. This relaxed mechanism allows for a dynamic assessment of content credibility, where multiple users can weigh in on the authenticity of information without the stringent requirements of a fully decentralized blockchain. By leveraging community-driven validation, the system can adapt to the rapidly changing landscape of social media, fostering a collaborative environment where users are incentivized to share accurate information. This hybrid approach retains the core benefits of blockchain—such as transparency and accountability—while simplifying the verification process, making it more accessible and practical for everyday users navigating the complexities of online information.
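One way this relaxed, reputation-based consensus could be sketched is below; the weights, thresholds and update rule are arbitrary assumptions chosen purely to illustrate the idea of weighing user judgements by reputation and adjusting reputation after the fact:

```python
def credibility_score(votes, reputations):
    """Weight each user's authentic/not-authentic vote by their reputation."""
    weighted = sum(reputations.get(u, 1.0) * (1 if v else -1) for u, v in votes.items())
    total = sum(reputations.get(u, 1.0) for u in votes)
    return weighted / total if total else 0.0

def update_reputations(votes, reputations, verdict, step=0.1):
    """Reward users who voted in line with the eventual verdict; penalise the rest."""
    for user, vote in votes.items():
        delta = step if vote == verdict else -step
        reputations[user] = max(0.0, reputations.get(user, 1.0) + delta)

reputations = {"alice": 1.5, "bob": 0.5, "carol": 1.0}
votes = {"alice": True, "bob": False, "carol": True}  # True = "looks authentic"
print(round(credibility_score(votes, reputations), 2))  # 0.67
update_reputations(votes, reputations, verdict=True)
```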


Navigating the Intersection of Sustainable AI and Real-World Implementation: Challenges and Considerations

In the realm of sustainable AI, there is a growing emphasis on utilizing artificial intelligence to advance sustainable objectives. Recently, I participated in a conference hosted by Responsible AI UK (RAi UK) and UK Research and Innovation (UKRI), which delved into the application of AI to facilitate the attainment of net zero goals in the UK. The discussions showcased intriguing initiatives led by researchers, ranging from leveraging AI to rejuvenate ecosystems by identifying optimal tree species based on environmental factors like soil quality and climate, to employing AI for carbon capture and storage and enhancing the energy efficiency of vehicles.

Alongside these opportunities, the discussions also highlighted significant challenges. The following paragraphs reflect a personal perspective on the complexities surrounding energy efficiency in AI development and the trade-offs between accuracy and sustainability. While innovative ideas abound, a significant hurdle lies in bridging the gap between theoretical models and practical implementation in real-world scenarios.

Many AI projects encounter challenges in transitioning from development to market or public deployment due to the inherent limitations of models in fully capturing the complexities of their operational environments. Moreover, from a sustainability perspective, the computational demands of complex models for data generation, training, and operation pose significant energy consumption concerns, particularly in the absence of readily available data. Some argue that researchers should quantify the energy costs associated with developing, deploying, and operating AI models—a consideration that extends to all organizations involved in AI system development and deployment. However, measuring energy consumption accurately remains a formidable task, exacerbated by the reliance on cloud infrastructure. While cloud providers could offer insights into energy consumption at the instance or virtual CPU level, the feasibility and business benefits of such endeavors remain uncertain. Furthermore, many organizations engaged in AI development lack the incentive to prioritize energy efficiency in their operations.
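As a rough sketch of the kind of accounting being suggested (the power draw, run time, PUE and carbon intensity figures below are placeholder assumptions, not measurements of any real system), the estimate essentially reduces to device power multiplied by time, scaled by data-centre overhead:

```python
def training_energy_kwh(avg_power_watts, hours, num_devices=1, pue=1.5):
    """Estimate energy for a training run: device power x time, scaled by
    the data centre's power usage effectiveness (PUE)."""
    return avg_power_watts * num_devices * hours * pue / 1000.0

def carbon_kg(energy_kwh, grid_intensity_kg_per_kwh=0.2):
    """Convert energy into an approximate carbon footprint for a given grid mix."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical run: 8 accelerators drawing ~300 W each for 72 hours.
energy = training_energy_kwh(avg_power_watts=300, hours=72, num_devices=8)
print(f"{energy:.0f} kWh, roughly {carbon_kg(energy):.0f} kg CO2e")
```

Even this crude arithmetic illustrates the difficulty: on shared cloud infrastructure, none of these inputs is directly observable to the organisation developing the model.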

Suggestions have been made to adopt energy-efficient algorithms or less power-intensive models; however, these proposals often overlook the profit-driven nature of most organizations and the inherent trade-off between energy efficiency and model accuracy. Balancing energy efficiency with accuracy necessitates a case-by-case evaluation aligned with the specific needs and goals of each business or adopter. Ultimately, the success of sustainable AI initiatives hinges on their alignment with the operational standards and business practices of organizations. The value of an AI system and its associated services lies in their ability to integrate seamlessly with an organization’s existing frameworks and operational norms.


Step 3 in how to become a Data Driven Business

Getting the data foundations right is absolutely critical to your journey to becoming a Data Driven Business. Probably even more important, however, is whether your business is truly aligned with your value chain, i.e. how well matched your strategies, organisational capabilities, resources, and management systems are to supporting the enterprise's purpose. If it isn't, then anything else you do to convert your business to one driven by data will either waste a lot of effort or, if you're lucky, deliver only a small percentage of the potential.

In my previous blogs on this topic I pushed hard on the need to leverage the asset you already own: your historical data. I stand by this as a great starting point. However, recent project experiences here at Elemental Concept have also highlighted the need to integrate third-party data to enhance your own sources, filling in gaps and adding perspectives that your own data may not cover. Great examples include using third-party data to challenge assumptions developed by looking inwards only, to identify market opportunities you have missed simply because you have never measured yourself against competitors, or to predict customer churn so that your account teams can intervene before it's too late. Another example from a recent customer was re-targeting marketing spend at under-serviced geographic markets rather than wasting it on competitor-saturated geographies.
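A simplified sketch of this kind of blend is shown below; the regions, figures and the notion of a third-party market-size feed are invented purely for illustration of joining internal data with an external source to spot under-serviced markets:

```python
import pandas as pd

# Internal sales data (the asset you already own).
own = pd.DataFrame({
    "region": ["North", "South", "East"],
    "our_revenue": [120_000, 45_000, 30_000],
})

# Hypothetical third-party market-size data filling the outward-looking gap.
third_party = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "market_size": [500_000, 400_000, 90_000, 350_000],
})

combined = third_party.merge(own, on="region", how="left").fillna({"our_revenue": 0})
combined["market_share"] = combined["our_revenue"] / combined["market_size"]

# Under-serviced regions: big market, little of our revenue.
print(combined.sort_values("market_share").head(2)[["region", "market_share"]])
```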

Sadly, when you get to the end of the first implementation, all you are really doing is starting the next: you need to feed and tweak the data model continuously to keep pace with business changes, competitive threats and, god forbid, global pandemics. Get it right, though, and at least you'll be able to pivot your offering based on analytics rather than a hunch. That's got to be the way forward, hasn't it?

Use this Risk Scorecard to review where you are with a core building block for AI and find out whether you're looking good or have some work to do to get started.


Product Discovery Workshop: Not just gathering requirements, but validating a business, collaboratively

What do we do?

At Elemental Concept, we build software for start-ups to multi-national corporations. These days, multi-nationals are trying to transform their own business models before a rapidly moving start-up takes their customers, so there is an overall need across all types of organisations to get the right solutions to market quickly. The methods I have outlined in this article have worked well for both start-ups and large organisations.

Through ideation, requirements gathering and building systems, we adopt lean methods. In other words, we are not wasteful – we strive not to waste clients' time or money. We encourage failing fast and knowing when to pivot whilst validating our learning with cheap experiments.

This article summarises our continually evolving requirements gathering approach, which we call PRODUCT DISCOVERY. This is a workshop lasting between one and five days, during which we employ business, technology and design techniques to build a strong understanding of the client's needs.

This will be a useful read for anyone who needs to effectively gather requirements for any type of project and is into lean and design techniques. Throughout my career, I have initiated, managed and been the client for many projects and this is the most effective, enjoyable and least wasteful process I have ever used.

The birth of Agile

For some context, I’ll give a brief explanation of why Agile software development methods emerged. Over the past few decades, there have been a plethora of miserable IT project failures, many of them managed using traditional Waterfall methods (think detailed requirements gathering up front with monolithic business requirements documents).

The pace of change in today’s world of business and technology transformation lends itself to a less rigid methodology that welcomes changing requirements through the project life-cycle. History has shown that the larger and more complex the project, the riskier it is to use Waterfall project management.

In 2013, the calamitous release of the Healthcare.gov website (development costs in excess of $174m) exposed one of the most epic Waterfall failures, accelerating the adoption and bedding in of Agile methods.

Prepare for the Workshop

We prepare by researching the relevant business environment, market trends and dominant competitors, which help us to validate the business idea ahead of the workshop. By the time the customer arrives for the workshop, the wall of our office is covered with material to facilitate and provoke discussion. We’ll encourage workshop participants not to stay seated at the table writing into their notebooks. Instead, if they have something to write down, they write it on a post-it and stick it on the wall – everyone can see it when they want and there is a shared understanding created.

Several internal staff participate in the workshop to bring a mix of skills and experience creating different perspectives in the room. We have found that this helps to generate more rounded ideas – the best ideas and challenges can come from anyone, senior or junior.

Do the Workshop

Assessing the Business model

We start by assessing the client's business model. To do this, we use a slightly modified business model canvas, which was introduced by Alexander Osterwalder in 2008. The canvas is a tool that enables the people in the room to understand the big picture of a business model. We stick post-its into each canvas section to highlight the most important business model characteristics. The canvas becomes a tangible and persistent object to which we refer for the remainder of the workshop and during the software build too.

We begin with the fundamental reason for creating a business by defining the problem that needs to be solved for a customer segment. To illustrate, an example problem for Dropbox might have been: “It’s time-consuming and difficult to access and edit files across multiple devices”. The unique value proposition (UVP) describes the differentiator and could be a marketing tagline for your business, e.g., “data is securely backed up/sync’d automatically to multiple devices with no human intervention”. Other examples of the UVP might be a business model that is heavily customised to one customer type, offers services at low prices or performs a function quicker than the competition. The metrics section highlights a few actionable metrics that will be used to measure success. The following populated canvas is an example from a recent workshop.

The Magic of Story Boards

Once the canvas has been filled in, we move onto storyboarding. Storyboarding is about imagining a future where your product or service exists and then telling detailed stories about how people will use it. This technique helps to understand exactly how the software will work and is much better than an abstract description. Storyboarding takes place in two distinct phases: persona mapping and user journeys.

Persona Mapping

To understand the end users of an application, we map out their characteristics using the ‘empathy map’ tool. We populate all the areas shown on the empathy map and end up with a very clear picture of who this person is, what they think, what their goals are, their frustrations, etc.

We usually try to identify someone known to the client to bring more clarity to the stories for the user journey section.

User Journeys

A User Journey is not a generalisation of something happening; it’s very specific. We detail how the future state product or service will be used by the personas, generating specific scenarios using the Job Story notation.

An example storyboard is shown below. While the specific user journey details are not in themselves important, we have seen again and again that this kind of storytelling reveals valuable insights and details that would otherwise be lost in generalisations.

To extract more clarity from storyboarding, we draw any screens on a flip-chart – it’s quite difficult to nail down specifically what information you need on a screen. Again, this moves us away from generalisations and abstractions.

Story Mapping

In the next section of the workshop, we build the story map, derived from the user journeys. The story map arranges user stories into a model to help understand the functionality of the application, identify new requirements, and effectively plan releases that deliver value to users and the business.

We arrange all the features and functions in columns, with the high-level functional area at the top of each column. The client then prioritises each column so that the most important stories sit at the top, decreasing in importance as you go down. A simple sketch of this structure follows below.
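The story map can be represented very simply in code; the functional areas and stories below are invented for illustration only:

```python
# Columns are high-level functional areas; stories are ordered most to least important.
story_map = {
    "Onboarding": ["Sign up with email", "Social sign-in", "Profile photo"],
    "Payments":   ["Add card", "One-off payment", "Saved payees"],
    "Reporting":  ["Monthly statement", "Export to CSV", "Custom dashboards"],
}

def release_slice(story_map, depth):
    """Take the top `depth` stories from each column to plan a release."""
    return {area: stories[:depth] for area, stories in story_map.items()}

mvp_candidates = release_slice(story_map, depth=1)
print(mvp_candidates)
```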

The Minimum Viable Product (MVP)

Airbnb, Snapchat, Uber, Spotify, Dropbox and the other unicorns out there all started with a good business concept that was validated in its simplest form, the minimum viable product (MVP). This was then grown into the mega product that you know today.

We always strive to build the MVP, which is used as a tool to collect user feedback. That feedback is used to improve the product and validate the concept. Based on the feedback, the client can rapidly pivot, persevere and scale, or dismiss the project altogether. Once the story map has been prioritised, we identify the features for the MVP, endeavouring to keep this as small as possible, so it’s kept quick and simple.

Prototyping

At this point, instead of proceeding with the build, we like to add a further validation step. We build a prototype to make sure that potential end users want to use this product and the user flow is simple and intuitive. The prototype is high fidelity, so if it’s a mobile app, the prototype looks like the real thing; it runs on a phone, it’s interactive, but no software has been developed. The data is all faked rather than being delivered and processed via a web server or API, so it’s very quick to build (~1 week).
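In practice the faked data can be as simple as a stubbed data source that stands in for the eventual API; the sketch below is purely illustrative, with made-up account data:

```python
# A stubbed data source: the prototype reads from this instead of a real API,
# so screens can be exercised with realistic-looking content.
FAKE_ACCOUNTS = [
    {"name": "Everyday account", "balance": 1523.40},
    {"name": "Savings pot", "balance": 8200.00},
]

def fetch_accounts(user_id):
    """Stand-in for a web service call; no server or API is involved."""
    _ = user_id  # ignored in the prototype
    return list(FAKE_ACCOUNTS)

print(fetch_accounts("demo-user"))
```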

Once the prototype is built, we run a user interview session, where a UX designer observes and interviews users whilst they use the prototype. The feedback is used to refine the prototype and now, we are ready to start the Agile development phase.

Why we love it

Every time we have run this workshop, we see the client go on a journey. They understandably arrive with pre-defined ideas and it can be hard to let these go. Through this process, we encourage leaving preconceptions behind to generate fresh ideas. The client stakeholders walk in usually knowing what they want, but they walk out knowing what they need, as some of the features that the customer wants are usually unnecessary for the MVP, which is a learning tool. The workshop gives ownership to the people in the room – we want to deliver the product and we believe in it.

Please contact us at Elemental Concept ([email protected]) if you would like to know more about our product discovery or indeed if you would like to try it out for yourselves.


The bedrock of a healthy work culture

Let’s build a great culture

A healthy culture and work environment leads to happy and productive employees; everyone wants to come to work because they are having fun and are mentally challenged at the same time. I've experienced different company cultures during my working life and have developed a strong preference. Now, as the co-founder of a technology start-up (http://www.elementalconcept.com), we have the exciting opportunity to build an organisational culture from the ground up. The bonds, loyalty and trust between people are the most important thing here. We are all friends, because we have made an effort to get to know each other and we like each other! This article describes some of the foundation principles of our culture and why we have chosen them.

Diversity breeds tolerance

For any organisation, recruitment is one of the prominent areas of focus when building the right culture and we are no different in that respect. We’ve been lucky to kick off with a small team that we know very well from a previous company, so have only needed to hire into one position so far. We recruit for attitude, as well as skills; the ability to empathise and trustworthiness are what we look for in our people.

When recruiting, the cultural fit is key. Saying that, we don't look for one personality type only; diversity is important and assembling different kinds of thinkers tends to generate the best ideas and solutions. In our case, with 12 people so far, the diversity is amplified by the number of different nationalities we have in the office: Belgian, Corsican, English, Irish, Italian, Latvian, Pakistani and Russian!

When it comes to creative inspiration, job titles and hierarchy are meaningless, so we try to avoid them wherever possible. We strive not to constrain people by putting them in strict boxes or roles. We want our people to shine in areas in which they are strong, then offer them the opportunity to explore new areas. In this way, we have an office full of teachers.

Autonomous, Cross-Functional Teams

In a modern organisation, the team is the unit to be assessed, rather than the individual; the way the people in a team interact with one another is the key to success and the best teams are made up of people who complement each other.

“There will be no stars in this team, we need everybody – everybody has a valid perspective” Frank Maslin

Autonomous, cross-functional teams are a bedrock of Agile methods. Populating a project team with people who can cover all areas in the project minimises waste and broadens individual skillsets. For example, a cross-functional team can be made up of developers, network engineers and UX/UI designers. This is a departure from more traditional management methods, which espouse functional teams and top-down communication. We augment the typical command and control style management with bottom-up communication; the managers loosen their control and accept risk.

For a cross-functional team to work well, each team member needs to become genuinely interested in the other team members, including the skills they have. In this way, everyone becomes a teacher and a student, which generates motivation, pride and enthusiasm. As a matter of principle, we don't stand our ground for the sake of it, and our guys spend time understanding the other person's point of view. The alternative is to become entrenched in your own point of view whenever something doesn't align with your ideas, which is a close-minded way to operate.

“If there is one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as your own” Henry Ford

Freedom to Fail

We are problem solvers; it's what we do day in and day out. The first step in solving problems is, of course, to uncover the problems that exist. We delve deeply into our clients' companies to understand everything that drives them. By engaging with them, through collaboration and discussion, we draw out the fundamental issues facing them and strategise the best solution to these going forward. Our team are driven by solutions and adding value. We strive to find these crux points so we can help our clients progress.

In solving problems, we give our people the room to experiment. They are free to try different things, run experiments and very importantly not to be afraid to fail. We expect problems and empower staff to solve them on their own. Most importantly, we respond well to failure. Ideas, solutions, systems will fail often – what doesn’t work out is merely a learning experience and fodder for the innovation cycle.

Get in touch

As our organisation grows, we have a challenge to maintain the culture I have described. I would love to hear from anyone who has had experience in maintaining a family-style culture in a growing organisation or has any views on the delicate balance between top-down and bottom-up communication.


Aligning your AI Strategy to your business values with the Sustainable AI Framework (SAIF)

Artificial Intelligence (AI) continues to infiltrate various aspects of daily life. While some are sceptical of its potential, others believe AI can provide faster and more effective approaches for addressing a wide range of problems, from policing and crime prevention to personalised healthcare and streamlining the judiciary. This belief is likely motivated by the success of AI in industries such as retail and manufacturing. It is, therefore, not surprising that many organisations, including public institutions, are trying to adopt AI and related technologies in some shape or form.

Unfortunately, adopting a technology with as much potential as AI generally requires many more inputs than the technology itself. In other words, recruiting data-related professionals and experts and having them develop AI systems is unlikely to be enough to meet all objectives.

We observed similar trends with the advent of the Internet.

As use of the Internet has increased throughout the 21st century, users have faced abusive practices such as unwanted commercial emails (spam), identity theft and, more recently, user tracking. Experts have responded to some of these by providing technical solutions to mitigate the challenges, while regulators and governments have stepped in to prohibit certain practices and, in some cases, now demand that organisations inform their users of their practices in advance. Importantly, we all expect organisations that we interact with online to mitigate our exposure to abusive practices. This means that, from an organisation's perspective, simply creating an online presence is probably not the biggest challenge. The real complexity lies in answering the following questions: Are all potential challenges addressed adequately? What is the potential return on investment in building an online presence? And what is the impact of the online presence on the organisation's ability to create value in the short, medium and long term? In conclusion, organisations need to manage the risks around their online presence.

Investing in AI is not so different: successfully developing AI systems that meet the constantly changing needs of today's society requires a holistic, methodical and adaptive approach designed to assess and manage the risks around AI. Such an approach should ensure that AI systems developed by an organisation are aligned with its way of conducting business and meet certain social standards. The importance of this is corroborated by the increasing prominence of principles such as privacy, fairness and social equality in discussions around AI. For an organisation, failing to meet those standards can give rise to lost opportunities. Worse yet, it may even lead to an organisation's demise, as the example of Cambridge Analytica demonstrates.

Our Sustainable AI Framework (SAIF) is designed to help decision makers such as policy makers, boards, C-suites, managers and data scientists create AI systems that meet business and social principles. By focussing on four pillars related to the socio-economic and political impact of AI, SAIF creates an environment through which an organisation learns to understand its risk and exposure to any undesired consequences of AI, and the impact of AI on its ability to create value in the short, medium and long term.

Do you want your business’ AI systems to reflect your organisation’s vision and way of conducting business? SAIF is the framework you need to materialise that objective.


What is Artificial Intelligence?

A comprehensive definition of AI is indispensable for the development and deployment of AI systems that meet the ever changing dynamics of today’s society. This short article attempts to develop such a definition.

John McCarthy, one of the founding fathers of the Artificial Intelligence (AI) discipline, defined AI in 1955 as

“[…] the science and engineering of making intelligent machines, especially intelligent computer programs […]” [1]

where

“intelligence is the computational part of the ability to achieve goals in the world” [1].

McCarthy further provided an alternative definition or interpretation of AI as

“[…] making a machine behave in ways that would be called intelligent if a human were so behaving […]” [1].

Over the years, and especially within the business world, the definition or interpretation of the term AI has evolved or has been altered to incorporate development and progress made within the discipline. For example, Accenture in its Boost Your AIQ report defines AI as

“[…] a constellation of technologies that extend human capabilities by sensing, comprehending, acting and learning – allowing people to do much more” [2].

Likewise, PwC's “Bot.Me: A Revolutionary Partnership” Consumer Intelligence Series report defines AI as

“[…] technologies emerging today that can understand, learn, and then act based on that information” [3].

Other definitions exist: For example, the Oxford dictionary defines AI as

“[…] the theory and development of computer systems able to perform tasks normally requiring human intelligence such as visual perception, speech recognition, decision making, and translation between languages” [4].

A common theme from these definitions is the emphasis on “human-like” characteristics and behaviours requiring a certain degree of autonomy such as learning, understanding, sensing, and acting. They, however, do not provide a framework to underpin such behaviours. This is problematic because humans, whose behaviour AI systems or agents are supposed to mimic or in certain cases act on behalf of, behave according to a number of principles and standards such as social norms. Social norms can be argued to provide a framework to navigate among all the behaviours that are possible in any given situation. They introduce the notion of acceptable behaviour, because they determine the behaviours that “others (as a group, as a community, as a society …) think are the correct ones for one reason or another” [5]. As a result, socially accepted behaviour is central to how we act in a given context. This suggests that the definitions of AI presented above are somewhat incomplete, because the AI agent or system has no way of determining which behaviour is acceptable among those that are possible without such an equivalent framework.

While fictional, Asimov's three laws of robotics probably represent one of the first attempts to provide artificially intelligent systems or agents with such a framework. Attempts to create new guidelines for robots’ behaviours generally follow similar principles [6]. However, numerous arguments suggest that Asimov's laws are inadequate. This can be attributed to the complexity involved in the translation of explicitly formulated robot guidelines into a format the robots understand. In addition, explicitly formulated principles, while allowing the development of safe and compliant AI agents, may be perceived as unacceptable depending on the environment in which they operate. Consequently, a comprehensive definition of AI must also provide a flexible framework that allows AI agents or systems to operate within the accepted boundaries of the community, group, or society in which they operate. By doing this, activities of AI agents or systems designed under such a framework naturally allow other stakeholders to regulate their activities, too. As a consequence, we choose to adopt a new, extended definition: By AI, we understand any system (such as software and/or hardware) that, given an objective and some context in the form of data, performs a range of operations to provide the best course of action(s) to achieve that objective, while simultaneously maintaining certain human/business values and principles.
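This extended definition can be loosely translated into an interface sketch; the function names, scoring and constraint below are entirely illustrative assumptions, not part of the definition itself:

```python
def choose_action(candidates, objective_score, constraints):
    """Pick the action that best serves the objective, restricted to actions
    that satisfy every human/business value constraint."""
    acceptable = [a for a in candidates if all(ok(a) for ok in constraints)]
    if not acceptable:
        return None  # no action falls within the accepted boundaries
    return max(acceptable, key=objective_score)

# Toy example: maximise engagement without sending night-time alerts.
actions = ["email_now", "push_at_3am", "push_at_9am"]
score = {"email_now": 0.4, "push_at_3am": 0.9, "push_at_9am": 0.7}.get
no_night_alerts = lambda a: "3am" not in a
print(choose_action(actions, score, [no_night_alerts]))  # -> push_at_9am
```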

References

[1] http://www-formal.stanford.edu/jmc/whatisai/node1.html

[2] https://www.accenture.com/t20170614T050454__w__/us-en/_acnmedia/Accenture/next-gen-5/event-g20-yea-summit/pdfs/Accenture-Boost-Your-AIQ.pdf

[3] https://www.pwc.in/assets/pdfs/consulting/digital-enablement-advisory1/pwc-botme-booklet.pdf

[4] https://www.lexico.com/definition/artificial_intelligence

[5] Lahlou, S. (2018). Installation Theory. In Installation Theory: The Societal Construction and Regulation of Behaviour (pp. 93-174). Cambridge: Cambridge University Press.

[6] https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/