
Decision making and open democracy in a future of big data and artificial intelligence

Link to this site (for PDF) https://rusk137.wixsite.com/amberruskegdf/single-post/2018/05/17/Decision-making-and-open-democracy-in-a-future-of-big-data-and-artificial-intelligence

We are living in an increasingly digitalised world: the Digital Age. There is so much potential in technological developments and breakthroughs, so how can we ensure that this technology is truly used for the benefit of humankind and open democracy? Most people are familiar with phrases such as Big Data and AI, but to summarise, Big Data is essentially the large data sets collected from our digital footprint. This data is normally analysed by artificial intelligence (AI) to reveal trends, and in some cases even to predict them. AI refers to machines that are programmed in a way that allows them to perform tasks that previously only humans were capable of. These tasks could include anything from speech and face recognition to language translation.

The capabilities of AI are continually growing. Machines can now be programmed with a form of artificial intelligence that provides an intellect comparable to that of a human baby, with the ability to mimic basic tasks. These advancements are the result of the rise of machine learning, where machines are programmed in a way that allows them to ‘learn’ from their environment. “The effects of AI will be magnified in the coming decade, [as virtually all industries aim to] transform their core processes and business models to take advantage of machine learning” (Brynjolfsson, 2017). There is no doubt, however, that these technologies can be and have been used in ways that undermine open democracy and human wellbeing, and so the purpose of this work is to explore, question and start a conversation on what we do or don’t want our future society to be.

 

Facebook, Cambridge Analytica, big data and trust in technology

2.19 billion people logged onto Facebook during the first quarter of 2018. Imagine the gargantuan footprint of personal data being immortalised on the internet as a result. But what truly is the extent of this data collection, and how should data sharing consent be presented and applied in a big data context? Recently Facebook has been fighting to regain trust after details were publicised about its involvement with Cambridge Analytica (C.A.), a data analysis firm. Using Facebook as a foundation, C.A. created a quiz that asked permission of Facebook users to view their data. The quiz then harvested this data, and that of the users’ Facebook contacts, improperly storing the information of reportedly over 50 million users. It’s alleged that this data was used to influence the public during the US presidential election and the Brexit referendum. This breach of data sharing policies, and cases like it, often makes the public question their trust in the use of big data. “When data is acquired as a byproduct of service provision, consent remains problematic in the big data context” (Strandburg, 2014, p.31). However, there is a fine line between providing enough transparency about data usage to obtain informed consent and providing so much information that it becomes difficult for the user to digest.

Post on Facebook privacy settings by Mark Zuckerberg, and response from Matt Navarra (2018)

Source: Mark Zuckerberg (2018) www.facebook.com/zuck

Permission to share from Matt Navarra (2018) //twitter.com/mattnavarra

The ethics of AI and big data: DeepMind

“Where these data commit to record details about human behaviour, they have been perceived as a threat to fundamental values” (Barocas and Nissenbaum, 2015).

Perhaps much of society’s mistrust of artificial intelligence is the result of the way it is, or rather isn’t, explained. There is seemingly a grey area in the way many organisations use AI, and on what grounds they make decisions or use data. DeepMind Ethics & Society is an ongoing research unit that uses the company’s interdisciplinary knowledge, insights and research to set benchmarks that can serve as guidelines, so that the application and governance of AI and big data are ethical and benefit society. “If AI technologies are to serve society, they must be shaped by society’s priorities and concerns. This isn’t a quest for closed solutions but rather an attempt to scrutinise and help design collective responses to the future impacts of AI technologies” (Harding and Legassick, DeepMind, 2017). The unit opens up important conversations about the use of technology in an attempt to uphold ethical values and to make sure that digital democracy is not used in ways that undermine minorities. I believe that perhaps DeepMind’s biggest value lies in its transparency about ethics, its contextualisation of data flow and usage, and its disruption of the current research landscape in AI.

DeepMind (2018) Source: https://deepmind.com

Democracy innovators, empowering citizens with AI: Citizens Foundation

So how can AI be used to benefit society? Many recent articles have discussed how the digital age could break democracy through the misuse of big data to influence our behaviour. However, “The tools used to mislead and misinform could equally be repurposed to support democracy” (Polonski, 2017).

Created in the wake of the 2008 Icelandic economic crisis, the Citizens Foundation aimed to open up social and political discussion amongst citizens, giving them opportunities to take part in local decision making and the knowledge needed to have a more active and responsible relationship with their society. Their platform, Your Priorities, allows citizens to submit issues they find in their local area. Artificial intelligence then sorts through these submissions and brings those that are most requested and most in need of solutions to the forefront. By creating a participatory democracy, Iceland could rebuild the trust of its citizens and make better-informed decisions based on their voices.

Citizens Foundation, Your Priorities (2008)

 

The future of decision making in 2030+: Citizen-Centric Hyper eGovernment

Technological advancements in fields such as artificial intelligence and machine learning will undoubtedly shape our future society. However, “[technology’s] impact on society - and on all our lives - is not something that should be left to chance” (Harding et al., 2017). It is therefore important to question how technology might be used in scenarios such as a hyper eGovernment in 2030+, as a means of ensuring that its use in these potential futures is for the good of society. Despite the increasing capabilities of machines, it’s important that society remains human-centric, so that supporting technology is governed and iterated in a way that maintains current standards. This is reflected in this scenario, where big data and AI are prominent and their key purpose is to improve the wellbeing of society and citizens.

Working as part of a group, our preliminary research and analysis began to highlight various weak signals that would shape and influence our further work, which would be carried out through experiments, provocations, and participatory design workshops. Particular areas we hoped to explore and analyse included people’s perceptions of AI and big data, how much people do or do not trust technology, and the potential of machine learning. How would these factors affect the prospect of AI being used to promote democracy and facilitate public participation in decision making to improve society?

Forecasting the future of AI in decision making

This body of research and experimentation aims to provide insight into how artificial intelligence and technology could be used in a citizen-centric future hyper eGovernment as a means of enabling open democracy through citizen empowerment, enlightenment, and trust. This future is digital, so it’s important to consider how technology may be used and governed to ensure it benefits society and is not used in ways that breach human rights or trust. There are already weak signals in the use of technology that should be addressed.

Weak signals

The way in which companies such as Facebook are currently being scrutinised for their data scandals shows that changes need to be made in the future development and governance of technology. Transparency is needed, so that technologies which could have a hugely beneficial impact are not held back by mistrust and the public’s unwillingness to share data. Companies such as DeepMind are truly holding a torch for how ethics could be applied to the development of technology, but how can this ideology be applied to future AI? How can we ensure that citizen empowerment is at the centre of technological development? “There are decisions that can be made by analysis. These are the best kind of decisions. They are fact-based decisions that overrule the hierarchy. Unfortunately there’s this whole other set of decisions you can’t boil down to a math problem” (Bezos, 2017). In addition, I question the success of simple yes/no decision making; even when unanimous agreement is reached, conversation should continue to ensure equality in a democratic way.

 

Experiment 1: Public decision making accompanied by artificial intelligence / Project good .1

Purpose: To imagine how decision making could work in a future government, and to discover people’s opinions on data sharing and public decision making. What if the future of decision making is AI based?

Prototype: Our first prototype was created for a presentation at LCC. The initial concept was to prototype a website on which members of the future public could submit projects for their local area. Other locals would then be allowed to vote on whether the project would go ahead or not. This concept aimed to encourage citizens to participate in local decision making. In the future scenario, the platform would use artificial intelligence to analyse each citizen using big data collected as a result of the eGovernment’s hyper-connectivity. The citizen would then receive a personalised report on how the proposed project would affect them personally.
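To make this part of the concept more tangible, here is a minimal sketch of how such a personalised report might be generated. Everything in it, the Citizen fields, the weightings and the wording of the report, is a hypothetical illustration rather than part of the prototype itself.

# Hypothetical sketch of a personalised impact report, assuming a citizen
# profile derived from eGovernment big data. All fields and weights are
# illustrative only, not part of the actual prototype.
from dataclasses import dataclass

@dataclass
class Citizen:
    age: int
    distance_to_site_km: float   # distance from the proposed site
    has_children: bool

def impact_report(citizen: Citizen, project: str = "skatepark") -> str:
    score = 0.0
    # Closer residents are assumed to be affected more strongly.
    score += max(0.0, 1.0 - citizen.distance_to_site_km)
    # Families with children are assumed to benefit from recreational space.
    if citizen.has_children:
        score += 0.5
    # Younger citizens are assumed more likely to use the facility directly.
    if citizen.age < 30:
        score += 0.3
    verdict = "likely to benefit you" if score > 0.5 else "likely to have little direct effect on you"
    return f"The proposed {project} is {verdict} (impact score {score:.1f})."

# Example: a parent living 300 m from the proposed site.
print(impact_report(Citizen(age=38, distance_to_site_km=0.3, has_children=True)))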

We encouraged our presentation’s audience to imagine they were particular stakeholders within our presented scenario. The example scenario used in this presentation was of a boy named Jacob who wanted a skatepark built in his local area. We provided several members of the audience with persona cards that gave them each a role as a different actor within this hypothesised proposal. After the participants received an analysis of how the skatepark would affect their given persona, they were asked to vote on the fate of the project.

Findings and iterations: The feedback we received from this presentation suggested that we should further explain how artificial intelligence would be used, and what data would be needed to realise a concept like this. It also became apparent that future prototypes would need to explore other examples of community problems; the skatepark was not particularly successful as a provocation for gaining the insights and feedback needed to develop the idea further. The most predominant feedback echoed a weak signal that had already become apparent: that the use of technology should be explicitly explained, and its use of data should be optional and transparent.

Experiment 2: Public decision making accompanied by artificial intelligence / Project good .2

Purpose: To explore people’s trust in artificial intelligence and their opinions on data sharing.

Prototype: The second prototype was an iterated version of the first, taken to Central Saint Martins (CSM) for a workshop we hosted for various members of Camden Council, GDF, and other related specialists. We also wanted to explore how much trust people have in technology and in the use of big data. This prototype included a more explicit and controllable way for citizens to choose what data they share on the platform.

Again, we encouraged the workshop participants to act out various personas and vote on the solution. In response to previous feedback we began to consider further how artificial intelligence would be implemented in a system like this, although at this point we had no solid concept. As a result, we also used this workshop as a means of starting discussions with attendees to gain insights and opinions from their various backgrounds.

Findings and iterations: The inclusion of privacy settings in this second prototype appeared to resolve some of our previous issues. It was suggested that we look at some of the Camden 2025 aims and create relevant examples for our future prototypes. We were also encouraged to explore how our platform could provide a wider range of solutions to issues, to maximise its success and benefits and to consider the effects on different stakeholders. The black-and-white nature of our platform, which based the fate of each project on yes or no votes, could also be developed further: there is potential for discussion, and for different voices and opinions to be heard, creating a basis for more educated votes.

Scenario and Persona cube created for workshops and presentations:

Link to video (for reference in offline versions of this blog): https://youtu.be/A87GKtOaDDM

Experiment 3: Public decision making accompanied by artificial intelligence / Sherlock

Purpose: To explore the potential of aiding public decision making through information provided by artificial intelligence, and to explore alternatives to, or developments of, our voting system.

Prototype: In response to the feedback received from the CSM workshop, we researched the Camden 2025 aims and began to create a new prototype of our system using relevant cases. It had previously been suggested that we consider a ‘wicked challenge’ on which to base and test our system, so we chose a Camden housing problem. Camden already needs more housing, and this need will only grow in the future. The wicked nature of the challenge comes from how the issue would plausibly be solved: by building on green spaces. The obvious problem is that the residents of Camden want to keep these green spaces while also wanting more housing.

This final prototype, named Sherlock, is a platform that aims to improve democracy and society by opening decision making up to the public. It is supported by embedded artificial intelligence that can analyse historic and current data in order to present potential solutions and avoid problems before they happen. In addition, Sherlock provides information as a means of enlightening citizens in their decision making, giving them a voice in their local communities while presenting the viewpoints of minorities, promoting equality and fairness in keeping with open digital democracy. This reflects the ideology of Camden.

Sherlock also implements a two-round voting system: after the preliminary round of voting there is a specified period of discussion, debate and reflection, followed by the final vote. This allows for public participation and conversation in decision making, empowering residents, encouraging responsibility, and strengthening relations while maintaining democracy.
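As an illustration only, the two-round flow could be sketched as below; the vote lists, the tallying rule and the assumption that a simple majority in the second round decides the outcome are my own simplifications for the sake of the example, not specifications of Sherlock.

# Minimal sketch of a two-round voting flow. The proposal, the vote counts
# and the majority rule are placeholders for illustration.
from collections import Counter

def run_round(votes):
    # Tally yes/no votes and return the counts.
    return Counter(votes)

# Round 1: preliminary vote to surface the initial split of opinion.
round_one = run_round(["yes", "no", "no", "yes", "yes", "no", "no"])
print("Preliminary result:", dict(round_one))

# ... a fixed discussion period follows, in which AI-provided analysis and
# minority viewpoints are shared before anyone votes again ...

# Round 2: the final vote taken after the reflection period decides the outcome.
round_two = run_round(["yes", "yes", "no", "yes", "yes", "no", "yes"])
decision = "approved" if round_two["yes"] > round_two["no"] else "rejected"
print("Final result:", dict(round_two), "->", decision)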

Sherlock summary video for users and stakeholders:

Link to video (for reference in offline versions of this blog): https://www.youtube.com/watch?time_continue=20&v=XdMVNV9vNEg

 

As an additional way of explaining to stakeholders and workshop participants how AI would be used, we created a simple prototype showing how different individual citizen data sets (or perspex slides in this example) would be processed to produce a unique code (or pattern) as an amalgamation of the data. This unique code would then be processed and compared to the codes created from other case studies. By considering the similarities between these case studies and assessing their success rates, this kind of system could present a personalised analysis for each citizen.

Data processing prototype:

Link to video (for reference in offline versions of this blog): https://youtu.be/os4SOPESRvo
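A rough sketch of the idea behind the perspex-slide prototype, assuming each ‘code’ can be represented as a numeric feature vector: a new case is compared to historical case studies by similarity, and their success rates are weighted accordingly. All vectors and success rates here are invented purely for illustration.

# Sketch of the data-processing idea: each case is reduced to a numeric
# "code" (feature vector), and a new case is compared against historical
# case studies by cosine similarity. All numbers are made up.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Historical case studies: (code, observed success rate of the outcome).
case_studies = [
    ([0.9, 0.2, 0.4], 0.8),   # e.g. a comparable past housing project
    ([0.1, 0.8, 0.7], 0.3),
    ([0.8, 0.3, 0.5], 0.7),
]

new_case = [0.85, 0.25, 0.45]

# Weight each historical success rate by how similar its code is to the new case.
weights = [cosine_similarity(new_case, code) for code, _ in case_studies]
estimate = sum(w * rate for w, (_, rate) in zip(weights, case_studies)) / sum(weights)
print(f"Estimated success rate for the new case: {estimate:.2f}")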

Lego service map of Sherlock, created to clarify information flow and the user journey

Future trend forecast - Technology, Decision making and open democracy

From my prior research, analysis and experimentation, patterns had begun to appear in the form of recurring weak signals, stretching from the present day through to potential problems and needs that could arise in a hyper-connected technological future. As a result, my work aimed to transform these challenges and barriers into opportunities by creating solutions that would maximise the benefits of technological advancements in a future world, with the aim of empowering the public, citizen decision making, and open democracy. By amalgamating this work, I have created a forecast of five trends for the future of government in 2030+.

Trend 1 / Artificial intelligence to predict and prevent problems

Artificial intelligence already has the ability to crunch through data and reveal predictions, so it is not hard to imagine a future in which these capabilities are built into how society and government work. Through my experimentation and research I began to consider how this type of technology could be applied to decision making around local problems. Taking this further into the future scenario, what is to say that the AI itself couldn’t predict future problems and present solutions that prevent an issue from arising at all? Big data, and historical data in general, could be a goldmine if innovation and research continue to develop the ways in which it can be analysed. “Big data is not about trying to “teach” a computer to “think” like a human. Instead, it’s about applying math to huge quantities of data in order to infer probabilities” (Mayer-Schonberger and Cukier, 2014, p.11).
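To ground the Mayer-Schonberger and Cukier quote, here is a minimal sketch of inferring a probability from historical data, using scikit-learn and an entirely invented toy dataset; the indicators and figures are illustrative assumptions, not real Camden data or any part of the prototypes.

# Minimal sketch of "applying math to data to infer probabilities": a toy
# model estimating the probability of a local issue (e.g. a housing
# shortfall) from two invented indicators.
from sklearn.linear_model import LogisticRegression

# Historical records: [population growth %, vacant homes %] -> shortage (1) or not (0)
X = [[2.1, 1.0], [0.3, 4.5], [1.8, 1.2], [0.5, 3.9], [2.4, 0.8], [0.2, 5.0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Probability that a ward with 1.9% growth and 1.5% vacancy faces a shortage.
probability = model.predict_proba([[1.9, 1.5]])[0][1]
print(f"Estimated probability of a housing shortage: {probability:.2f}")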

Trend 2 / Transparent data acquisition and usage

A few questions I was consistently asked throughout my experimentation and prototyping for this project included: “Where is the data coming from for your system?” “Who would see the data?” “What data is needed?”

Public awareness and understanding of data is increasing. Despite this, there are still grey areas, and combined with public data scandals such as that of Facebook and Cambridge Analytica, there is often a lack of trust in data usage and acquisition. To continue developing and to reap the benefits of big data, public trust must be built through transparency. My findings suggest that this could be achieved by being transparent about how data is used, how it flows, and how it is acquired.

Trend 3 / Technology as a tool for citizen decision making and empowerment

Looking back at my final prototype and the ‘wicked challenge’ it presented, this scenario truly showed the benefits of using AI as a tool to help improve decision making. In complicated situations like this, AI can provide unbiased information on the impact of each potential outcome from various perspectives. This gives citizens knowledge and insights that help them make well-informed choices: they can consider the impacts on people they might not otherwise have taken into account, allowing minorities to be heard.

Trend 4 / New standards of decision making

The first prototypes I created used a yes/no form of polling for decision making, but there were flaws. Often the fate of the problems and projects presented on the platform could not simply be decided with a yes or no. Instead, a two-part poll would be implemented, giving people time after the preliminary vote to reflect and join conversations on the subject to widen their understanding. The second, and potentially final, vote would then be taken after a predetermined amount of time, and its result would decide the outcome. Perhaps provisions like this could be implemented in future government and societal decision making.

Trend 5 / Preventing misuse of AI that could hinder open democracy

Not only should data usage and flow be transparent to citizens, but data should also be explicitly prevented from being used to wrongfully manipulate citizens’ decisions. The Cambridge Analytica scandal, in which data was reportedly used to influence elections, only damages trust in technology that has the potential to promote and improve open democracy.

 

So how can we ensure that future technology is truly used for the benefit of humankind and open democracy? Through my research, analysis of weak signals and discovery of trends, I believe that a combination of the following factors can achieve this. Human wellbeing and privacy should be at the forefront of technological development; without either, citizen trust will be broken. AI and big data should only be used for the benefit of societal innovation. This can include providing information and insights about the perspectives of different people to help citizens make informed decisions, but it should exclude any form of manipulation. Data flow and usage should be contextualised and explained in an understandable manner, improving citizens’ trust in these systems, allowing their further development, and creating a greater willingness to embrace and explore future technologies and uses of big data. While my work throughout this project has been based in a future hyper-connected, human-centric scenario, I believe that concepts and provocations such as these should be considered in the present day, questioning “what if” and creating disruption that can spark innovation and inspire social improvement.

“We should all be concerned about the future because we will have to spend the rest of our lives there” (Charles F. Kettering, 1949).

 

Sources:

Biddle, S. (2018). Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document. [online] The Intercept. Available at: https://theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai/ [Accessed 14 May 2018].

DeepMind. (2017). Why we launched DeepMind Ethics & Society | DeepMind. [online] Available at: https://deepmind.com/blog/why-we-launched-deepmind-ethics-society/ [Accessed 17 May 2018].

Digitalsocial.eu. (2018). Digitalsocial.eu. [online] Available at: https://digitalsocial.eu [Accessed 6 May 2018].

Helbing, D. et al. (2017). Will Democracy Survive Big Data and Artificial Intelligence?. [online] Scientific American. Available at: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/ [Accessed 10 May 2018].

Econsultancy. (2017). What's the difference between AI-powered personalisation and more basic segmentation?. [online] Available at: https://econsultancy.com/blog/69112-what-s-the-difference-between-ai-powered-personalisation-and-more-basic-segmentation [Accessed 17 May 2018].

Government.nl. (2017). Citizen participation. [online] Available at: https://www.government.nl/topics/active-citizens/citizen-participation [Accessed 12 May 2018].

Harvard Business Review. (2016). The Business of Artificial Intelligence. [online] Available at: https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence [Accessed 17 May 2018].

Kaner, S. (2015). Facilitator's guide to participatory decision-making. Vancouver, B.C.: Langara College.

Lane, J. (2015). Privacy, big data, and the public good. New York, NY: Cambridge University Press. P.31

Lucey, B. (2018). Personalization & machine learning in 2018: From comms to content - MarTech Today. [online] MarTech Today. Available at: https://martechtoday.com/personalization-machine-learning-2018-comms-content-208937 [Accessed 10 May 2018].

Nccmt.ca. (2008). Engaging citizens for decision making | Resource Details | National Collaborating Centre for Methods and Tools. [online] Available at: http://www.nccmt.ca/knowledge-repositories/search/86 [Accessed 15 May 2018].

Mayer-Schonberger, V. and Cukier, K. (2014). Big Data. [Hamilton, N.Z.]: Summaries.com.

Medium. (2017). Machine Learning for Decision Making – Teconomics – Medium. [online] Available at: https://medium.com/teconomics-blog/machine-learning-for-decision-making-e776f9f8917e [Accessed 12 May 2018].

Oii.ox.ac.uk. (2017). Artificial intelligence can save democracy, unless it destroys it first — Oxford Internet Institute. [online] Available at: https://www.oii.ox.ac.uk/blog/artificial-intelligence-can-save-democracy-unless-it-destroys-it-first/ [Accessed 14 May 2018].

openDemocracy. (2017). Brexit wrecks it: the theory of collective decision making. [online] Available at: https://www.opendemocracy.net/wfd/peter-emerson/brexit-wrecks-it-theory-of-collective-decision-making [Accessed 4 May 2018].

Polonski, V. (2017). The use of AI in politics is not going away anytime soon. [online] The Independent. Available at: https://www.independent.co.uk/news/long_reads/artificial-intelligence-democracy-elections-trump-brexit-clinton-a7883911.html [Accessed 3 May 2018].

Citizens Foundation (2008). Digital Democracy Home - Citizens Foundation. [online] Citizens Foundation. Available at: https://www.citizens.is [Accessed 13 May 2018].

Schmidt, E. and Cohen, J. (2013). The new digital age. New York: Alfred A. Knopf.

TechCrunch. (2018). Deep learning with synthetic data will democratize the tech industry. [online] Available at: https://techcrunch.com/2018/05/11/deep-learning-with-synthetic-data-will-democratize-the-tech-industry/ [Accessed 6 May 2018].

The Nation. (2018). Democracy Needs a Reboot for the Age of Artificial Intelligence. [online] Available at: https://www.thenation.com/article/democracy-needs-a-reboot-for-the-age-of-artificial-intelligence/ [Accessed 5 May 2018].

Thersa.org. (2018). The digital city: the next wave of open democracy? - RSA. [online] Available at: https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/09/the-digital-city-the-next-wave-of-open-democracy [Accessed 6 May 2018].

