Future Affairs 2019 – Livestream


[Music]. Digital Revolution. What does it
mean to be human? Possible futures. [Music]. “What are our visions of
possible futures, chances and challenges? The state of
democracy. Also able to predict our
feelings and reactions.” Resetting global power politics? Welcome to Future Affairs. [Applause].>>Good morning, ladies and
gentlemen, and, yes, indeed, welcome to Future Affairs, the
first event hosted jointly by the Federal Foreign Office
and re:publica on the opportunities and challenges of digital
transformation and democracy. My name is Geraldine de Bastion,
and I have the honour of being your
stage host today. It’s my particular honour to introduce
three gentlemen for the opening ceremony of this event: His Excellency Carlos Alvarado Quesada, President of Costa Rica.
[Applause]. The Federal Foreign Minister of Germany, Heiko Maas. [Applause]. And the CEO and co-founder of re:publica, Andreas Gebhard. [Applause].>>Ladies and gentlemen, President Alvarado, colleagues, Mr Gebhard.
Elections for a new European Parliament were held, and we are seeing more and more analysis of the influence social media has had on these elections, but the debate is held in a different way than we would have expected a few weeks ago. In the run-up to the elections, risks were raised about so-called fake news and disinformation on social media. However, their impact on these elections was weaker than many would have thought. Maybe we’ve also become more
wary? The earthquake on the internet, the wave of first-time voters, wasn’t launched by tweets from right-wing populists or Russian bots; it was young people who emancipated themselves from old analogue politics, who used digital media to communicate directly with their
audience. More than 80 YouTubers reaching
several million people showed us the
democratic potential of digitalisation. They also showed us how it profoundly changes our
democratic co-existence. When we speak about digitalisation
today, we are talking about opportunities and risks, and we
are talking about how we can develop solutions and ideas for how we
can make our values applicable online
too. We have to learn new ways of shaping policies and of communicating them. Ladies and gentlemen, it is undeniable: being a technological leader in digitalisation is a superpower base, a true game-changer, because it has an impact on the bases of our power. Those who have
the best access to data will control the decisive commodity
for machine learning. Those who set standards and norms will be in a key position, and best placed in the competition of superpowers. If further breakthroughs or quantum leaps in computing capacity are realised, that will again shift the balance of power, and China and the US realised this long ago. This also explains the relentlessness of their struggle for digital leadership. We are seeing conflicting mindsets here: those who focus on co-existence,
and those who demand complete
technological isolation from the other. And this is why the world is
faced with a threat of a new division, not a military one,
like we’ve seen in the Cold War, but a technological division. The debate about introducing 5G
is, for us Europeans in this context, something like a true reality
check. It has shown us how close we are
today already to a world in which we will only have a choice between an
American and a Chinese tech sphere. You can
easily imagine what such a world would look like if it really becomes a reality one day. At one end of the spectrum, you
will see a model that uses technology as a means of control and of preserving power; mass surveillance and censorship, but also systems like social scoring, will be the tools of choice. Technology, therefore, will
become a true totalitarian tool. At the
other end of the spectrum is a model that rejects any form of regulation as an interference with freedom in the digital space. Those who develop new technologies then draw the line, so everything that is technologically possible will be acceptable. Between these two extremes stand
we Europeans, but also, I believe, many of our guests from Latin American and Caribbean countries. We believe in the huge
positive potential of digitalisation but we also
realise the dangers to our democracy. Isolation, however, is not a
solution. Free, open societies need free,
open internet. But freedom also needs rules. Let’s be honest: in
order to set the rules and to implement them,
exercise them, you need to have influence, especially in
international politics. You need influence that none of
our countries can have on its own. So, ladies and gentlemen, the
digital revolution is the best example
of why multi-lateralism is the solution, and why nationalism
doesn’t work in a globalised world. The digital revolution cannot be
managed or controlled by individual states. Digital connectedness requires
political connectedness. We can see the first results and
successes. Since 2013, we’ve launched six resolutions on the
right to privacy in the digital age with Brazil at the United
Nations General Assembly and the Human Rights Council of the
United Nations. Last year, we worked very closely with Mexico when we needed to develop a new mandate for the group of governmental experts elaborating standards for responsible state behaviour on the internet. And the chairperson of that
group is Brazilian. With regard to the momentum of
digital change, we need to take one step further, and this
is why we have invited you to come to Berlin today. You count on multi-lateral
solutions like we do, but our voices are often unheard. So countries like yours,
President Alvarado, have a lot of experience as locations of innovative high-tech companies and start-ups, and this is why we
have purposefully chosen Latin America as our partner region
for the first Future Affairs Conference
here in Berlin, and we’ve taken
re:publica on board. That’s Europe’s largest digital
conference, because one thing is very obvious: central issues for
the future such as climate change and digitalisation can’t
be managed without civil society, without companies, or
without NGOs. We need your creativity, your expertise, and sometimes also
your provocation in order to find the right way to deal with the digitised
world. Ladies and gentlemen, our path needs to be a new one, one that stands for the middle ground between the totalitarian and the libertarian model. We have prepared this middle ground, despite all criticism, with the General Data Protection Regulation. It has become an
international model and a de facto standard in many
countries. Most recently, even Facebook
founder, Mark Zuckerberg, made it clear
that a regulation like the one the Europeans introduced could be in the interests of US tech companies who wish to have a global level playing field. In the field of AI, an EU expert commission published guidelines at the beginning of April, and the EU intends to achieve in the field of artificial intelligence what it has already achieved with the General Data Protection Regulation, where it has been a strong actor in setting international standards. That means that when we stand together, we are anything but powerless. Ladies and gentlemen,
we need to be able to keep up in terms of technology, so we will do everything we can to make sure that the EU systematically gears its framework to future technologies, research and development, because innovation safeguards influence. That’s more true than ever before in the digitalised world. Which best practices already exist is something I would like to talk about today. Let me mention a few approaches we are working on at the Foreign Office right now. One is disinformation. Already today, we use specific algorithms and AI to better
understand debates on social media. It helps us to react and counteract faster if false information is spread on the internet, as human traffickers did during the refugee crisis. Within the EU, we’ve established an action plan
against disinformation, and we want to better co-ordinate our
efforts. Another buzzword is “early
crisis detection.” Last year, there were reports saying that China was using AI to prepare foreign policy decisions. We are quite far away from such a step, but we have developed a platform that helps detect crises at an earlier stage: economic data, figures, climate data, terrorist attacks, fighting, information about demographic developments – all this information helps us to detect crises at an earlier stage, and that can be worth a mint. Early-warning systems and technology: we can see in Syria how this saves lives. There, we are supporting an early-warning system for air strikes against civilian and humanitarian institutions, with simple sensors connected via social media. They save lives every day in
areas such as Idlib. Ladies and gentlemen, these examples alone
show the huge potential that social media has. But the debates that we’ve seen
last week also reminded us of that. There is a saying, and,
interestingly, it’s a Chinese saying, that says, “When the wind of change blows, some
build walls, and others windmills.” I think I can speak on behalf of many in this room when I say we disapprove of walls. Walls
have not been a good idea in the past, and they are of course not
a suitable solution for the future. This is why I would like
to call upon everyone to let us use the wind
of change of digitalisation. Let’s build windmills together,
and, hopefully, we can start today. Thank you so much. A very warm welcome to all of
you. Thank you. [Applause].>>Ladies and gentlemen, please
welcome the President of Costa Rica on
stage, Carlos Alvarado Quesada. [Music].>>Minister Maas, ministers from
Latin America and the Caribbean,
friends that host us here in Berlin: for me, it’s a particular pleasure to be able to address this audience, and what I will try to do in the next couple of minutes is to share some thoughts regarding digitalisation, but also the challenges that have been perceived in the last months and the context in which they are faced. I want to quote the futurologist
who said that the future is already here, it’s just not evenly distributed. I would like to underline the
part of distribution. Because we talk a lot about
being prepared for the future, and working on that, but the changes that our
world experiences are already here. But there are different
realities that co-exist in space and time, and
it has a lot to do with inequality at the
end. I would like to talk a little
bit about it. Let me put this as an example. There are several German firms that have operations in Costa Rica. Those firms operate within a larger cluster spanning several areas – technology, medical appliances, pharmaceuticals, logistics, and so on. This cluster also is linked with
many of the leading companies in
technology from edge technology, to cloud
technology, to data analysis. So we have quite a vibrant cluster developing what’s reaching the market, changing the way we perceive things, and making growth happen, and there is a very specific part of Costa Ricans who benefit from it, because they are researchers, they are employees. They work within this cluster, at the very edge, let’s call it, of those developments. Those people are multi-lingual; those Costa Rican workers are educated. They have formal jobs. They have higher degrees of
welfare, of well-being. And at the same time, despite all this being very relevant for our economy, our growth, our future, we co-exist in the same place, space, and time with 20 per cent poverty still, and six per cent of extreme
poverty. People with low schooling, and
not necessarily with all the opportunities. This is to mention how we are
producing now. And one of the greatest
challenges currently, for example, is our unemployment. It’s not explained by a lack of job opportunities – actually, this cluster is demanding more labour force. Our current unemployment, which is at ten per cent, and that’s high for us, is explained by skills and education, and by how we give many people the opportunity to be part of this cluster or other successful clusters, such as tourism, or agritech, adding value to agriculture. One of my first points is that we
have two realities co-existing, one in technology, one in labour opportunities. But these realities express themselves in other fields as well, and one of the fields where you can see the results is how people express themselves through democracies, because inequality also shapes how people vote and how they perceive reality. I think with this I’m painting a scenario, let’s call it at this
time great, but there are great opportunities as well. [German translation; no English feed.] … renewable. That means we do not rely for
our electricity on fossil fuels. This is part of how we’ve achieved that. [Applause].
Obviously, this plant is digitalised to improve how it performs. But now, one of the potentials that we have discussed with many of the partners, some here in Germany, is that with this we can also produce green hydrogen. That means hydrogen that we can produce from clean renewable sources. It can be used in different industries. That’s one of the steps we are working on. By the way, this is also a
public hydroelectric plant, run by the Costa Rican government, and we are aiming to have public-private alliances to look for new developments. Why am I
mentioning this? Because these are also possibilities of how we can spread the benefits
of digitalisation, of jobs related to climate change, and
how we address it, and the possibilities we are
facing and have ahead. If I have to think about
digitalisation, climate change, and what we are
facing now, as societies, I think we
have to link it with inequality. In those worlds that co-exist,
there are many who are included in digitalisation, in education, in job opportunities, and, a few kilometres away, there are many who are in extreme poverty. For me, it’s clear why
democracies are having these extreme debates,
because, for one part of those populations, it’s evident how, for example, climate change is a threat to the whole of humanity. There is a vibrant movement, lots of young people
moving in that direction. Also, as a government, we just
launched in February a national decarbonisation plan, so that by 2050 Costa Rica completely eliminates the use of fossil fuels, not only in transportation but also in agriculture and in freight. For one part of society,
that’s very relevant. For another part of society, those concerns are intangible. They do not address their most immediate needs; on the contrary, working on them seems far away from their needs. When I was coming here, I received some criticism regarding that: decarbonisation and gay rights, they told me, are the highest concerns of our government, when it should be concerned about employment. So are those not connected at
all? We should take the debate also
on climate change and on digitalisation to the field of opportunities.
Addressing inequality. The question is also how? Because in Costa Rica we have a
long way to go, but there are several things that have helped
us to address this. For example, 70 years ago, we abolished the
army. We have no armed forces whatsoever. Those resources we use in
education: education for the whole of the population, with a high-quality standard for everybody, not just for some people. It’s one of the key elements that could help people be part of the fourth industrial revolution, and also be aware of the problem that climate change represents. So, instead of spending billions of euros, and dollars,
and yen on the military, we should be educating more people to make a more inclusive society. [Applause]. Also, innovation to address climate change: there is a lot to be done there, but
there are lots of job opportunities, and we can use artificial
intelligence, blockchain. We can use data analysis. We can
improve our technologies and our innovation to do it, and there
are lots of possibilities there also for inclusion. And I do
believe that is possible, but that requires courage and
action. There’s one example from history
that I love to quote, and I think here
is a great place to quote it. I read it last December while trying to have some vacation. I was reading the history of your marvellous country, Germany, and one passage mentioned that,
in the 1920s, there was a young mayor
in Cologne, and, at that time in the 1920s, he stated that the peace and future of the communities in the border region of Germany and France particularly depended and relied on working together and making peace and progress
happen. He said that in the 1920s. The key thing is that that idea, that sole idea, was present, and could have been predominant. That young mayor in the 1950s became the Chancellor of Germany. He started the Coal and Steel Community, which is now the base of the European Union. But what is the insight? The idea was there 30 years before, and then it was made possible. The courageous ideas we have now can come true in order to make the world a better place. So why not cut spending on the
military, why not address the future of climate change and
digitalisation to make real change? Because that is also the basis of democracy: inequality reflected in the debates on climate change, and inequality reflected in access to technologies, might be the key question for the future of our Western democracies, which we all know are now being pressed in the public debates. So, the thought I wanted to
share was the one related to democracy. I also would like to share this one, because I think it has lots of strong insights for what we are facing: we don’t inherit the Earth from our ancestors; we borrow it from our children. That’s what we are facing now. [Applause]. I hope I provided some thoughts
and some provocative thoughts, as Minister Maas said; it’s our contribution in these terms. I
thank you all, and I hope you have a brilliant forum. Thank
you very much! [Applause].>>Wow. Ladies and gentlemen,
please welcome the CEO and co-founder of
re:publica, Andreas Gebhard.>>Guten Morgen. My name is Andreas. [German
spoken]. [Applause]. [German spoken]. The central question today is that we don’t simply accept the global players which set the standards and shape the digital world. Copying them, or trying to do what they do, is not the strategy for Europe or Latin America, or for other parts of the world, because we cannot win that race – they are so far in front. But there are possibilities. We have to choose, and there we come back to a point which we have been making for a long time, and I would like to talk about that. There is free software and open
source. We have technologies that set global standards which are not owned by single companies, and we could build on that on the governmental side, with resources normally spent on the military. [Applause]. We have a global digital
society, and we are all at the same level of
communication. We have possibilities to work together from all parts of the world, and
this is a chance for all countries in
the world, especially for Africa, and other regions in the south, and therefore we
at re:publica are very proud that we had a meeting in Ghana, and we saw,
yes, okay, we are all at the same level. Digitalisation is a global movement, and therefore I’m very delighted that you’re all here today, because – and this is especially important for me – we at re:publica are working to create places where we can talk about the digital society. When we look at where that conversation normally takes place, it’s not somewhere we think much about: the telecoms shop, for example, where you buy internet access. But where is society talking about the consequences? During industrialisation, for example, we had trade unions and a large … to organise ourselves, but where are the platforms to organise digital society and make it human? I think re:publica, and this venue
as well, are approaches that we have to spread through society in the future, and I would like to give that to you: please create places where citizens can talk, because they feel unsafe because of digitalisation, because no-one talks about that, and there are no places where they can meet and talk, apart from spaces of hate communication, like Twitter, for example. Because, at the end of the day, the most important point is that we, the humans, control the machines and not the other way round. I wish you a great day. Thank you. [Applause].>>In just a few minutes, allow
me some housekeeping remarks first. This stage is being live streamed, so a big hello to everyone watching,
not just here in the audience today but from their homes or
offices. We’re happy to have you join our event as well. If you
want to engage in conversation with us, of course, we will give
the opportunity for you to network during the breaks, and
to ask questions during the Q&A sessions of the panels,
but you can also use Twitter, especially if you’re not here in
the room with us today. A hashtag for this event is
#FUTaf19. We would be very happy to
receive your comments and questions, and also
be feeding those into the panels during the Q&A sessions. The
formats for the panels will all follow the same procedure: each
panel will have a moderator, and each panel will have ten-minute
inputs from all the panellists before they will engage in a
moderated discussion, and then open to your questions and
comments. The sessions will also be
translated into German and Spanish here in the Weltsaal,
and you should have all found your translator headsets on your
seats already. If you’re missing one, let one of our staff
members know. Should you not get a seat here, and you wish to follow a session on the
Weltsaal stage, we have an overflow
room, so, if you want to, you can follow the conversation that
is happening here on the stage over in that room as well if it
gets too crowded in here. And I will be announcing the parallel
programme that we have running on the other stages for you
throughout the day as well, just as a reminder to let you know
what else is happening. I think that’s it. So, it’s my pleasure to move on
and announce the first session that we are going to have on this stage here
today entitled Digital Revolution: resetting global
power politics. Apart from the European elections, headlines are dominated by the trade war between the USA and China, including the 5G dispute. So the question of who is going to build and who is going to rule the physical infrastructure for our digital societies is one of the key questions when it comes to geopolitics in the world today, and one of the questions addressed by our panel. During the next 1.5 hours, we will examine how
digital transformation is impacting on politics and how it is reshuffling or
reinforcing existing power structures. We are excited to
have leading thinkers from across the globe speaking in
this panel. Laura Rosenberger, Nanjala
Nyabola, Uri Rosenthal, and Oliver della Costa Stuenkel,
hosted by Andreas Michaelis, State Secretary of the Federal
Foreign Office. Please welcome them all.>>Welcome to the conference. We have picked a new format to look at digitalisation and analyse its impact on our security and foreign policy. You could call it an exercise in diplomatic foresight. The first panel is tackling the question: does the digital revolution reset global power politics, like the industrial revolution did 150 years ago? It’s my pleasure to introduce our
panellists who will each offer a ten-minute glimpse into the
topic from his or her own perspective. The US, Europe, Africa, and
Latin America are represented. Let me start with Laura
Rosenberger from the US, director of the Alliance for Securing Democracy at the German Marshall Fund of the United States. Uri Rosenthal to her left is an
esteemed political scientist and practitioner, for three years Foreign Minister of the Netherlands, co-founder of the Freedom Online Coalition, which brings 30 states together in protecting human rights on the internet.
Uri will offer us the European perspective on our topic. Then Nanjala Nyabola, author of the
book Digital Democracy, Analogue Politics, on how the internet era is transforming politics in Kenya,
former Rhodes Scholar at Oxford University. Last but not least,
Oliver della Costa Stuenkel, born in Düsseldorf,
but now coming from Brazil. He’s professor for international relations in São Paulo,
a columnist for El País, and a fellow of a public policy institute. Before you start your keynotes, allow me to
set the scene: for the last 70 years, the insignia of a major
power was a seat on the UN Security Council, and the possession of
intercontinental ballistic missiles. If you had neither,
economic success could stand in as a power factor, or, when all else failed, the attractiveness of your state or
societal model. The last decade has added a new type of power
factor – a digital one. It’s a truism that digitisation
changes everything, but I believe that we can only imagine
the tip of the iceberg of what digitisation will change in the
long run. This is a vast field, so, for
the sake of focusing our discussion, we have sent the panellists a number of
guiding questions. Of these, I have picked out
three, and I hope that your keynotes, as well as our
discussion afterwards, will bring us a little closer to
answering them. The first question is: how will digitisation impact the current
power balance? For example, will the possession
of conventional armed forces, and nuclear capabilities, still
have the same relevance in times of cyber
warfare and autonomous weapons systems? Will new instruments of
warfare strengthen states with good IT capacities, such as
India, Latin America, or Eastern European countries? And will future wars be fuelled
by new strategically relevant resources in the tech realm
instead of oil? Second question: will
digitisation increase global inequalities? The WTO forecasts that the world economy will grow until 2030 by
a third due to technological advances
such as 3D printing, automated driving, and AI. Will the economic divide between
First World and Third World widen further, or will digitisation expand
international value chains and enhance chances for a more
evenly spread global development? The answers to
these questions will have a great impact on our
security as well. Third and last question: how do we handle a
world where geographical distance has less and less
meaning? On the one side, this is good
news already today – Latin America,
Africa, south-east Asia feel closer than a decade ago. But it has a downside too: any
trend can turn global in no time. Nothing can be contained
effectively. This affects our foreign and
security policy. These questions lie at the heart of our debate. Now, I would like to invite Laura Rosenberger to
take the floor. Thank you, Laura. [Applause].>>Thank you so much. It’s a
privilege to be with you all this morning. I really look
forward to the conversation we are going to have on this panel. Digitisation is changing
everything, as we’ve just heard. It’s coming at a time of other changes happening in the world. It’s acting both as an
accelerator to those changes, and one that is actually being
shaped by them. So, as I look out at the global
trends that we are seeing around the world, digitisation and
technological change is one of them. The other change I see is
declining democracies, democratic recession in parts of the world, places where
we felt democracy was on the ascent or
had been solidly established but where we see democratic
backsliding occurring, and that is happening at the same time we
are seeing authoritarian regimes around the world exercising
growing power. That’s particularly the case with Beijing but we see this from
other authoritarian actors as well. I think the way that these
two trends are converging, and the way they interact with one
another is one of the things that I’m paying particular
attention to, because I think they will shape the domains of
global competition and conflict over the next several decades
and define the political forces that are shaping power. In fact, as we’ve heard, these
technologies have implications for how power is exercised itself, and I
believe they have implications for where conflict is going to
be carried out. Now, there’s a lot of talk about
autonomous weapons systems, and how technology will be able to
change and shape traditional military
conflict or connected conflict. I believe that these
technological changes will create new battlefields. I think
we are seeing conflicts occurring in new domains, whether or not we recognise them as such. Indeed, both the Chinese Communist Party and Putin’s Russia define the information domain as a domain of conflict. They see cyber and information as integrated domain spaces for battle in a way that many democracies do not recognise – and, in fact, in a way whose implications democracies are challenged to understand.
These are areas where information control and the
ability to manipulate information is something that
inherently builds power. It’s built into the way that
authoritarian regimes are able to exercise power, and they’re
far more comfortable with doing so. But the flip side is that it
creates real challenges for democracies to determine how, in
fact, to both defend against and deter these kinds of activities
when it is our very own and free conversations that are being
targeted. This leads to a blurring of the
line between what is war and what is peace, between civilian and military battle spaces. As one AI futurist said, wars
will be fought by code, not hand-to-hand combat. I think the question is: have democracies recognised this battle we are facing? Are we equipped to actually defend against and deter this type of competition and conflict? And to do so in
a way that is affirming of democratic values and our way of
life that is affirming of human rights, and that holds those
values at the centre? We know that Vladimir Putin has
said, “Whoever becomes leader in this
space – being AI – will become the ruler
of the world”, and China has emphasised AI as a strategic technology and the need to combine it with “social governance”. Those lead
to different models of what technology does for society and
what it does for governance. I believe that the implication is
that democracy has become a battlefield. This isn’t just about democratic
values because we believe in them – of course it is, because
we do believe in them – but it’s also about interests. Our very
democratic institutions are at stake in this struggle, and for
me, the question is as we are developing these new
technologies, as we look to be able to harness the positive
sides that we heard several of the opening speakers speak
about, how can we ensure that these technologies are developed
in a way that puts humans at the
centre, that puts democratic values at the centre, one that enables greater openness
and opportunity, greater equality, rather than one that enables greater
control, greater authoritarianism, and where
machines are at the centre? It really is a question of not just
what does the future hold, but what kind of future do we want? And how do we right now take the
steps necessary to shape the future in
a positive direction? I think it means that we need to
recognise that, in addition to technological change, authoritarianism is a geopolitical force. It is one that is creating
significant challenges for democracies and one that we need
to rally against, one where we need to understand the long-term
implications. It’s not just about 5G
technology, it’s not just about particular kinds
of telecommunications infrastructure. We are seeing, for instance, the
Chinese Communist Party and its proxies export technologies around the
world to create techno-authoritarianism. Huawei’s smart cities include many significant components of surveillance and population control. Many send data back to entities that have connections with the Chinese Communist Party, helping them develop their own AI capabilities, perfecting them, and growing their own
capabilities while providing greater capabilities to
governments around the world that enable control and manipulation of
populations at scale. This is fundamentally a way that
the Chinese Communist Party is exercising power, and I don’t
believe that we yet have come to the realisation of
how technology is, in fact, leading to different ways of
exercising power. It’s why things like
standard-setting bodies, which may sound arcane to many foreign policy types, are actually incredibly important in
setting norms and rules for how these technologies are going to be used in the future,
for what, in fact, they’re going to look like. We need to
recognise these new ways that power is being exercised. And I completely agree with the
Foreign Minister that what this requires is multi-lateral
solutions. I completely agree that we need to find a new way between the
ultralibertarian and the Chinese authoritarian
model. And I think that, in fact, we can do that. It’s going to
require a new kind of long-term thinking. It’s going to require
democracies coming together to understand how we can create
rules and frameworks that continue to put democratic
values at the centre, while ensuring that we have
appropriate regulation and control in place for technologies that can
have a potentially destructive impact on our societies. In
fact, I do actually think that Europe is leading in many ways
on this front. GDPR is a really important
example of a space where Europe has taken the lead and is providing a model for
others to potentially follow. But I also think we need to do
more to update our institutions to reflect these changes. As we
heard, of course, at one point the UN Security Council and having a seat on it was really the way to exercise power. It
doesn’t quite seem to be the case any more. I think this has
implications not just for multilateral institutions
broadly but for our governments specifically. Have we orientated our
governments to harness the potential of these technologies,
understand their implications in the medium and long-term, and be
able to work with civil society in the important ways that are
going to be increasingly necessary and to bridge the
divides between the public and private sector? I’m quite mindful, coming from the United States, that the rift between Washington and Silicon Valley, and views in Europe on Silicon Valley, are not particularly positive, and I think that the
United States has a particular responsibility in this sense to bridge the divide between the
public and private sectors and to put in place smart regulation
that allows for innovation to continue because we have to
compete. But we also need to ensure that
that regulation is ensuring that these technologies develop in a
way that affirms the future that we want to see. It means we do need to rethink
how we are spending on our militaries. I’m a national
security person so I’m of the belief that we need to
continue to spend on our militaries and
deter strategic threats, but we need to recognise these new
battle spaces and ensure we have the capabilities in those
domains to defend against and deter competition and conflict
there. That means very different kinds of investments. In many
instances, it means investing in innovation and competition at
home. It means investing in our people. It means ensuring that
those investments are actually giving opportunity and building
greater equality. I also think it means we need to
reinvest in our institutions, like our foreign ministries. As a former diplomat, I feel
strongly about ensuring that we have highly skilled diplomats
who are able to deal with the world as it changes and
to help shape the future. We need to make sure that we are equipping our diplomats for 21st
century challenges. Finally, I think that it means
in this time of change we need to start with our friends and
our allies. And, quite candidly, this is
something I don’t see my government doing enough of in
this moment. I think we face an opportunity
in time right now. It’s urgent that we come together to work on
these challenges, and I hope that we will seize that
opportunity. Thank you very much. [Applause].>>Your excellencies,
distinguished guests, ladies and gentlemen. Decisive and
pertinent questions. Let me be brief and say right
beforehand that, when we talk cyber politics, and cyber
diplomacy, ladies and gentlemen, we do talk global
power politics and, let’s face it, in
May this year, so this month, actually,
for the first time, we saw cyber attacks
being countered in the world by a physical attack on cyber infrastructure. Cyber attacks by Hamas against critical Israeli infrastructure were countered by the Israeli air force destroying Hamas’s hacking headquarters. That is just a new point in this
realm. Ladies and gentlemen, I’ve been
in this field of cyber politics since
2010, and, in those days, it was still the rosy idea of the early IT and internet pioneers, who were actually the heroes of new horizons and all-in democracy. The phrase in those days still was “the internet is of all of us.” On the civilian side – not on the military side, but on the civilian side – security was not at the top of their minds. And
in those days, governments, indeed, were the black sheep. On
the governmental side, this actually gave way to increasing concerns
about cyber crime, and the online potential of the free flow of information in closed societies – Arab Spring, et cetera. Then we have been
engaged in the so-called London Process, the
global conference on cyberspace, where we tried to strike a balance between,
on the one hand, a free and open
internet, and security considerations, and, of course, the other point which is so
important not to forget, social
development and economic growth. The Freedom Online Coalition
initiated by me and Hillary Clinton in 2011
brought together 30 governments which were convinced that we
should really emphasise the need for this free
and open internet for human rights to be applied also in the cyberspace,
and for international law to be not only
for the offline, but also for the on
line matters. And, indeed, we already then saw
embattled NGOs which were trying to gain access to the open internet, but were being foreclosed in many ways. Today, concerns do not bear only
upon the states and their proxies, but
they bear also, as has already been mentioned by Minister Maas and the first speaker in this panel, upon the private corporations and their practices and sometimes malpractices. The picture today in the
world is as follows: when we take the big
players – Russia – President Putin has pushed through the Russian internet
law, claiming its own root and its own domain-name systems. Interestingly enough, he has offered it by now to the others in the BRICS constellation, where
Brazil and India are key players which are
now in a transitional kind of sphere between the notion of an open and free
internet, and the threats concerned. When you really narrow down the
Russian internet law, it is all about what is sometimes called the threatening
development from an open, global internet to a “splinternet” – the
nationalisation of the internet. A very bad future it would be,
indeed. China is confronted with serious questions about its own global players in cyberspace and in the digital economy. It
has already been addressed. Huawei, and also others, you
know, it’s quite a puzzle. I would only remind you that
still, in the Obama administration, the US
and China came to some sort of
agreement on really countering espionage and
hacking intellectual property. I would say that this agreement
seems not to hold that well. Third, the US administration,
hardball politics and hardball trade
deal-making. Let’s face it, too, when we talk
malpractices, it’s not just on the Chinese side but maybe also
elsewhere in the world. Last but not least, the global
south. We see tremendous developments
there. On the one hand, we see inequality, and, indeed, a digital divide,
but at the same time, we see remarkable
examples in the south, in the digital south, of what we call
“leapfrogging”: e-finance and the Internet of Things making tremendous progress in some parts of the south. And there, of course, we are in
an internet-connected world where the new gadgets also of
the internet of things should indeed be based on a firm infrastructure, not the least a
firm legal infrastructure. Now the EU, Europe. It’s often said as a triad: the US innovates, China replicates, the EU regulates. Yes, the US innovates, yes, China replicates, but it’s going quickly beyond that. The EU indeed regulates, and I
would say it’s very good that it regulates
in many ways. It has already been mentioned,
the GDPR. I would also add to this the
European Cyber Security Act, which is very important. It gives the agency a
stronger mandate and goes for certification
frameworks. In April 2019, the European
parliament actually amended the European
Cyber Security Act effectively to insert a duty for states as
well as non-state actors to protect the public
core of the internet from infringement, and, talking public core, we talk IP addresses, protocols, servers, cables, satellites. Let me say that this is not just for nothing. In January 2019, this year,
Sweden had to cope with an
attack directly on its domain-name
system and root servers. By the way, I think it’s very nice that the European parliament indeed
followed suit on the proposal of the
Global Commission on the Stability of Cyberspace, which
I’m a member of, to come up with these sorts of
norms. Then, this having been said, how should Europe, ladies
and gentlemen, position itself and act upon the challenges in
the global cyberspace? First and foremost, let’s not give in to
this. All efforts from the European
side should be based on the unique common values of the European Union –
human rights, honouring international
law in cyberspace, stressing the need for an open and free internet, and also
of course looking for the way to
keep trust in the cyberspace. Secondly, it should insist on
indeed high-trust cyberspace, and this
can only be achieved and guaranteed if
the focus is on sustained co-operation
within the EU – which is also mentioned, by the way, in the Cyber Security Act – capacity building throughout Europe, and, secondly, effective
co-operation outside the EU. There, we can be safe ground
when we talk about capacity-building
together with the countries in the south. I like to draw your
attention to, for instance, the Global Forum For Cyber Expertise
where good projects are under way – Germany, by the way, is a
member of this global forum. In this respect, and it has been mentioned by Minister Maas, it’s
of the greatest importance for the European Union to make the fullest possible use of the leverage of civil society, and to accept a multi-stakeholder approach to cyberspace and digital issues. Third, of course, the European
Union should take a tough stance against states and non-state actors that
are engaging in tampering with software production; states and non-state actors that really hamper cyberspace by cyber malpractices vis-à-vis the electoral machinery – elections, referenda, plebiscites; and, third, states and non-state actors – you see, it’s always this twin – that abuse ICT and social media for
violence-prone and terrorist purposes. I refer here also to the very compelling Christchurch
Declaration of a few weeks ago. In this respect, too, let’s face it, from the European end – or rather, as the Brits would say, “from the Continental end” – with Brexit being highly likely, we do lose of
course a direct link to the Five Eyes intelligence
community. Four, the EU should strengthen
its critical raw material strategy
and policies. Key raw materials such as, for instance, lithium, but we have
some 30 others in the inventory of the EU, which are indeed exclusive ingredients
of chips, batteries, and other high-tech products. This point is really vital. At the moment, China has abundant critical resources. It works on stockpiling, and indeed on buying stakes in critical resource areas and plants
elsewhere in the world, not the least Latin America, Mr
President. The last point: the EU should be
diligent in striking the balance between cyber security and cyber-connected socio-economic
development. If the EU wants to keep up with the global players
in cutting-edge technology, like AI, R&D
endeavours are of the utmost importance. That is my message
today. Thank you. [Applause].>>Good morning, everyone. People haven’t had their morning
coffee! [Laughter]. Good morning, everyone. As you’ve
heard, my name is Nanjala, and I’m an independent
researcher and consultant based in Nairobi, Kenya. I have the
immense privilege of being independent which means that I
can say whatever I want and nobody’s going to fire me! Please
do not invest more in the military! [Applause]. I am going to be speaking from
the perspective, a very unusual perspective, I think, in
conversations like this. I’m going to be speaking from the
perspective of an ordinary citizen. Why do I say that this is an
unusual perspective in meetings like this? Because, for so long,
we assumed that the central reference object of
foreign policy conversations must be the state, and must be
the capacity of the state. And we tend to neglect the
impact that the decisions taken at the state level have on
ordinary people and how ordinary people experience the decisions
that are made in rooms like this. So I’m going to be talking
from the perspective of an ordinary African citizen who has to live in this emerging, evolving, and highly
contested space. I have to caveat this by saying
I hate talking about Africa in the aggregate. I think when you
talk about 54 countries, thousands of nationalities, you
tend to get a lot of important details lost. So, if I give you the big
picture analysis, I would say 70 per cent of Africans are not connected to the internet, and you would go away thinking that it is 70 per cent in each of the 54 countries, but that would be completely untrue. The statistics on internet
connectivity in Africa are varied. You have extremes like Cape
Verde where 58 per cent of the population has connection to the
internet, and South Africa 56 per cent of the population using
the internet every day, and places like the Central African Republic and the Republic of Congo where the statistics go down to 19 or 20 per cent. I think the lowest percentage on the
continent is about 11 per cent of the population using the
internet every day. So we’re talking about a very
varied territory that is having a varied experience with the
internet. And that is why my research
focuses on one country. That’s why I write about Kenya primarily because it’s – it
allows us to get into the weeds of what is
happening. Kenya represents an interesting example of what
analysts like to call leapfrogging. I think it has all
of the advantages, all of the benefits, all of the good things
that we think about with the internet. As I said,
leapfrogging, we are talking about a country with an 83 per cent literacy rate and an 88 per cent internet penetration rate. Please don’t focus on the mathematics, but, according to the Communications Authority of Kenya, we have 112
per cent mobile phone penetration. We are good at marathons, maths
not so much! We’re talking about a young generation of digital
natives. 60 per cent of Kenya’s population is below the age of
35. A whole generation that has
grown up not remembering what the world was like before
the internet. 78 per cent of Kenyans who are connected to the internet are doing it on their mobile phones, so the mobile phone has really transformed how people are interacting with the internet, and the government has taken note. We have massive investments in government digital presence. As we speak, we’re in the middle of rolling out a programme to register all of the citizens in the country. If you want a new driver’s licence,
passport, you have to use the internet to do that. What does
that mean in practical terms as a citizen? What does it mean for
people who don’t? Only 12 per cent of Kenyans have
computers in their homes. What does it mean for the 88 per cent
of people? These are the questions we are
dealing with beyond the glossy
high-level terms of leapfrogging, and the massive
investments we are seeing in tech, what does it mean for
ordinary people? We are also seeing the
challenges. We are seeing the securitisation
of the internet space at an alarming rate. We are seeing predatory capital,
external investments, that are distorting how technology is
rolled out in the country. The 2017 election is a great
example of some of the phenomena we are talking about here. I
want to take a little bit of a side note here to say when you sit in
Europe, we hear a lot of fear of China, and China’s going to do
this, and China’s going to do that. Let me speak to you as an
African. We are equally afraid of both of you! [Laughter].
[Applause]. Because the company that did the
most mischief in the last Kenyan election was French. The companies that have received the most money from politicians to invest in distorting the social media conversation are American. And yes, 70 per cent of the IT infrastructure in Kenya is built by Huawei, built by China. There is a concern in that regard. We are seeing predatory capital from the US, from Europe, from other “Western” democracies that is distorting public life in Kenya and in other African countries. Sitting as I do in this unique vantage point,
going with the theme of this conference, I see how we
are resetting power politics. We see a lot of potential. The biggest sector of Kenya’s
economy is agriculture by and large, but, in the last year, we’ve seen 38 billion
shillings, or $38 million investment in the tech sector,
by far the largest in Africa. We’ve seen people who are living
in a country that doesn’t have natural resources, didn’t have
until recently confirmed oil deposits, didn’t have any kind of resource that would put
it on the map in traditional terms of international
politics, suddenly becoming a focal point for
foreign direct investment, becoming a focal point for
organising developing economies. We are seeing a shift in the way
citizens are engaging with their states, which is forcing a shift
in the foreign policy in the region. We are seeing a country
that is finally able to position itself in the
centre of these emerging discourses on how the internet
is going to transform, how we do politics on the continent. Young Kenyans speaking directly
to the African Union, speaking directly to the East African
Community, and putting their – on the agenda. What comes to mind, for example, is the Free Bobi Wine campaign run from Kenya, speaking into
Ugandan politics demanding the release of a political prisoner
running for office against the incumbent. This puts freedom on the agenda
in the regional politics. This puts the needs and the demands
of a young populace on the agenda
in a region that hasn’t necessarily responded to that in
the past. So, three themes that I guess I
would like to leave you with, in terms
of resetting power politics, and sitting in Africa being part of
this conversation, cyber security and the surveillance
economy is something that we are extremely worried about as
citizens. As I said, 70 per cent of the
ICT infrastructure that is being
built in Africa is being built by China, by Huawei. When I
sit in rooms like this, I don’t hear people saying let’s not
build cyber security architecture, I’m hearing let Europe build your cyber security
architecture. As a citizen, I don’t want
either. We are empowering administrations to clamp down on
freedom, to clamp down on citizens, to punish people
simply for dissenting against the mainstream opinion. There have been agreements between China and Zimbabwe to produce facial recognition software. We haven’t yet worked out how to get clean
water out of every single corner of Zimbabwe. There’s something
there about how prioritising this sexy new realm of conversation is shifting priorities in foreign policy in a way that is detrimental to us as ordinary citizens. We
are seeing states investing a tremendous amount of money in
surveillance, in ID technology. In Kenya right now, we’re in the
middle of this massive citizen registration drive that will cost an
estimated 6 billion shillings or $60
million. At the same time, we are recovering from the impact
of a devastating drought that left many people dead last year.
What does this say about how this conversation is shifting
agendas and distracting from other issues that should be
equally important? I’ve alluded to this, so I won’t
go into great detail because I see the timer flashing red. Africa, the second theme, Africa
as a subject of conversation and not an agent of foreign policy. We are talked about, we are
talked to. We are stuck in the middle of
this great power conversation, but
we, and especially we as citizens, our priorities are
simply not in the room. We are an afterthought. Again, I want
to underscore as we are talking about China, we should pay
attention also to Israel which is also selling significant
amounts of surveillance and policing
technology to various African countries to hack into phones,
computers, et cetera. We are talking about elections.
Elections in Africa with big money today. There is a lot of
money to be made in African elections. And companies all
over the world have noticed that. We talk about democracy
and the values of democracy, and we talk about
protecting democratic interests: are we paying attention to how this intersects, to how our money, basically, is being spent in
shaping electoral behaviour in different parts of the world?
That ties into my last point which is the growing influence
of private capital. As I said in the beginning, we tend to think of the state as the
natural object of foreign policy analysis, and
so here we are going to talk about the tension between the US
and China, even though it’s playing out as a trade war and as the influence of corporations. This presumption that all
capital is good, and all money-making
ventures are good, regardless of what they do to the other side
in the other side of the world, is a dangerous presumption, and
it does not hold water. I’ve alluded to this. I’m
underscoring a very quick point. The biggest purveyors of these technological black boxes – into which we put data and connectivity, and out of the other side comes chaos – are
private corporations. So are we thinking as we think
about EU regulating, et cetera, are we expanding the
conversation to include how are we going to address the
multi-national internet corporation that is housed in
the United States, that is answerable to US shareholders, that is
answerable to US legislatures, which is
causing genocide in Myanmar? Are we thinking about these new
connections and how we are going to organise around them? I’m
going to stop there and leave you with this thought: technology
isn’t going to do anything. We are going to do things.
Technology is shaped by the people who build it, and shaped
by the people who use it. And so this ceding of power, this passive positioning of technology, is the most dangerous thing we can do. We are the ones who are going to do things. Thank you. [Cheering and applause].>>Good morning, ladies and
gentlemen. That’s a very hard act to follow. I will give it my
best! It’s a great pleasure to be here. Briefly, I want to speak about
the Brazilian perspective, with the same caveats – I think there’s no way to give a Latin American perspective for such a diverse region – but I would like to focus in particular on the geopolitics of new technologies, and how Brazil in particular reacts to
the trade war and the growing tension between China and the
United States, and how 5G has become sort of a
symbol of how this will play out, and how I think today the tensions between the
two major powers are shaping politics in a way that I think will change domestic politics, such that the first question somebody asks a political analyst in Brazil today is how they stand on this particular issue that involves China and the United States. Actually, our government is deeply divided. Our president travelled to the United States. He’s very much a supporter of Donald Trump. Donald Trump told him, in addition to helping the United States on the – what I want you to do is limit Chinese influence in Latin
America. This is something that you could do which would make
Brazil a key ally of the United States. And then the
vice-president travelled to Beijing and I think heard
something somewhat spectacular, but of course just the opposite.
And the government is deeply divided, and I think that I would like
briefly to explain: I think China is winning this
confrontation in Latin America, because I think a lot of the
strategy that we are seeing from the United States, in
particular, the arguments used, may make a lot of sense in
Europe, and in the United States, but are actually
counterproductive in a developing country. I think a
lot of the arguments we’ve just heard from the Kenyan
perspective do apply to Latin America as
well. I think the – it was almost
funny. We had a meeting with Trump representatives who said
that Brazilians should consider when hiring
Huawei to install the 5G network that
Chinese spying was a real issue. Considering that, in 2013, what
derailed completely the bilateral relationship between
the United States and Brazil was the NSA spying
scandal, and Rousseff at the time cancelled
a state visit after not receiving a formal apology from
the Obama administration. Doesn’t help that mixed with
this “be careful with the Chinese” we see a return of the Monroe Doctrine, which is being supported by several leading Trump
policy-makers. In that sense, I think that
there is a sense that these issues like
Huawei and 5G are much more pawns in a broader war, that countries in Latin America are at the receiving end of it, and must continue to play a game
that they’ve actually played during the Cold War, which is to
seek somehow to benefit from this confrontation which
actually some countries during the Cold War
have done quite successfully. Another difference of course, is
that, where in Germany there are legitimate concerns that industrial secrets may be at risk by using Chinese companies, that is not the case in the – where many of these arguments, I think, are less of a concern than cost issues, for
example. You know, that, of course, in a
sense, I would expect most countries in Latin America to go for Huawei,
largely because it would be not possible from a financial point of view to look
for their other options. The key US message in Latin
America, particularly when it comes to new technologies, has
been about China. I spoke to a Central American
diplomat who said that all the United
States talked about was China when they came to Latin America,
and they sounded a bit like a jealous ex-boyfriend, who
is sort of concerned about the shift that is taking place. I’m
not saying that China has the better strategy, I’m saying as
an observer, I think that we
currently see a tremendous shift in the
region, and, paradoxically, the lack of soft power and of visibility is actually an advantage to China, because I
would think that most Latin Americans are unaware of the fact that China has become
the major trading partner of Latin
America’s main economies, such as Brazil. It’s a key investor. But when you look at surveys, actually, a lot of people think
that the United States is dominant, so, in a weird sense,
despite China being so influential, it’s used as a way
to balance US influence there. Now, China clearly has a
first-mover advantage when it comes to 5G. Again, I think that
will be decisive. At the same time, the US has
greater innovative capacity. That will create tensions. I
think we are only seeing the beginning of that right now. I
think it will be a key factor in Latin American politics. I think
both Beijing and Washington will increasingly pressure Latin
American countries to take a stand which is very difficult
because they also have to make decisions about the Belt and Road Initiative. I would expect the Chinese government to connect issues like Huawei, 5G, and the BRI, in the way that Trump is telling Brazil that the US will support its OECD membership depending on whether it decides to install Huawei networks. In that sense, I think it’s an
interesting region to look at to see how this plays out in the
developing world. Latin America will only represent, I think, five per cent of 5G users by 2025. There is also
a battle for global standards. I think, in that sense, Chinese companies will accumulate users in developing countries, making it a bit easier to dominate this race to establish global standards. I think we clearly have here a
geopoliticisation. As we see spheres of tech influence emerge, the games that others have
played in the past seeking to benefit from both sides will
become increasingly difficult because I think there is growing
concern that there will be a separated
world of two different spheres that have difficulties operating
between each other. I think that leaves us with many
questions about the basic assumptions we are making about
globalisation. I think it will make scaling technologies, developing and using them, and allowing knowledge to flow freely more difficult, as soon as these things are profoundly politicised. Brazil in
particular, just to finish, I think is an interesting case to
look at. It’s a very pro-American government, seeking to move closer to Washington. There’s a tremendous resistance
when it comes to doing that, because, in
particular, agriculture, Brazilian agriculture, believes
the new technologies will be tremendously helpful to increase
productivity, 5G’s a key element, and that has been
mentioned here before, to leap-frog, take
the next steps, and actually catch up, considering the many challenges
that Brazil still faces. Just on a final note here, I
think it’s also worth pointing out that there is of course a race in Latin America
to be the first country to develop these new technologies
and that also I think will play out in China’s favour, because, again, there isn’t really a non-Chinese alternative that a
developing country can opt for at this stage. Something fascinating to watch,
and I think it’s an element of this geopoliticisation of digital technology that will be a key factor in understanding Latin American politics in the next years. Thank you very much. [Applause].>>Thank you very much. A big thank you to the
panellists. Now something has happened that very regularly happens in panel discussions: thank you very much for setting out your views, which I think has given the audience a very comprehensive way of looking at the subjects, but we are running out of time. We have a good 14 minutes left
for this panel. I would suggest I just put one question to you, and, after that, we
allow the audience to interact with us. I think that’s
particularly important. The one question I wanted to ask, because it’s almost like a running theme in your interventions – I think none of you could present her or his view without at least mentioning China three or four times – is this. Of
course digital technology is born out of free societies, but
when we look at authoritarian societies, is it suddenly a tool that thrives particularly
well in authoritarian societies? Do we have to prepare ourselves
for a confrontation with authoritarian regimes in whose hands digital capacities and capabilities suddenly give them an advantage by a factor of three, four, or five, once they are ready to apply them in the non-regulated way in which they are applied in authoritarian societies, internally and externally? Or are we overestimating the capacities and capabilities of authoritarian regimes here? Laura? >>Sure. I mean, I decidedly believe
that, at least for the moment, many of these technologies, and
in particular the potential of general AI applications – artificial intelligence applications at a general level, facial recognition technology, audio recognition technology, combined with the social credit system they’re enabling – do enable surveillance, control and manipulation of populations with an ease and scale we haven’t seen in the past. The digital technologies we are talking about fundamentally affect and
potentially govern the information space, the
information architecture. Many of these technologies as they’re
changing are going to decide what the information
architecture looks like. So I think that that has applications
on two levels. One, what do governments do with that information architecture? Do they use it to enhance their own power? Or do they use it to enable free societies? And what we see is that many of these technologies do enable the control which is so critical to authoritarian power. So it enables authoritarianism,
but I think because we have in particular the Chinese Communist
Party and its proxies leading the development of many
of these technologies, it then enables the spread of authoritarianism because it
allows for export of these technologies in a way that
enables the control of populations more broadly. I do
believe that at least at the moment that is where we are
headed. I think that the idea of a
confrontation is very much in the cards if we don’t course-correct some
time soon. >>Yes. Again, you know, from a different perspective, I think that the ways in which technology is deployed are necessarily going to reflect the existing characteristics of the state, so, if we are thinking
about is it going to empower authoritarian
regimes? Yes. Because any state that wants to exert its authority will take whatever opportunity presents itself, and this is one of them. I like to think about this particular concept using Bentham and Foucault’s panopticon. The state can’t see everybody all of the time, but it wants to make you think it can see you all the time; it wants to make more people think they can be seen all the time. It means that it is almost
impossible to process all of the information that is generated,
all of the data that is generated by citizens every
second of every day – you’re talking about people going through borders, people paying for travel. It’s almost impossible to process all of
that information. What technology is doing is making it
easier and faster, making it more feasible to do that. What we’ve seen in a lot of African countries – I think about internet shutdowns, and how the existence of this capacity, this ability to switch off the internet at a tap – is that it has been used more and more. We had 17 internet shutdowns in 2016, and, again, that is merely a reflection of what the states were already doing beforehand, so, yes, but also it
is more of the same. I also wanted to add the
disparity of resources between the citizen and the state is the
thing that is of concern to me, that inasmuch as citizens
might be organising and using these new technologies to
organise resistance against authoritarianism, at some point,
the disparity of resources becomes so great that it is
impossible for citizens to overcome what the state is
doing. I’m thinking about Egypt in this case, where a lot of citizens used what technology was available – social media, email, whatever – to organise resistance against authoritarianism, and it looked like this was it, a big transformation coming. But then you are talking about the biggest military in Africa, supported by the biggest military in the world, which has an almost endless amount of money that it can spend on policing its own citizens. At some point, even, you know, 50,000 teenagers with the best smartphones in the world cannot compete. That’s the kind of
challenge that we are seeing especially from
activists, organisers, citizens, in African countries. That’s the
challenge that we are facing. Very quickly one last point:
again, China, China, China, but the
last big scandal to do with the internet and surveillance and
policing, the last big one to break in Africa was not
China, it was Israel. The WhatsApp exploit, where you could get a call on a WhatsApp line, and you didn’t even have to pick it up, but it would plant surveillance tech on your phone so people could listen in. This was the week before last. It could listen in to your conversations. A number of
African activists were targeted by this particular bot
specifically to police their behaviour. We are seeing China,
China, China because it represents another metatension, metaconversation
that is happening between the US and China. It’s six of one, half
a dozen of the other. [Applause].>>I would just want to add,
because I had the privilege of serving as
Ambassador in Israel, and knowing the IT industry of the
country very well, just to make a small distinction between China and Israel, not just in size but also in quality. In China, most of the abusive activities are state-driven, whereas in the case of Israel, you really have to see that they have a vibrant IT industry and an enormous private sector, so that is an important difference.>>All I know is my phone is
being listened to. >>To allude to that, I think it was interesting, when we were discussing this in the region, that the German government has been dealing with this in a constructive manner, without using the good-versus-evil framing, where we actually look at this in a technical way. And I just think it’s interesting: to what extent do these technologies cause authoritarian phenomena? I think they may facilitate it where there is fertile ground to do
these things. The US foreign policy adviser
was in Berlin yesterday, and she said if the German government
used Chinese technology, it would be
importing authoritarianism. I just don’t see that. I think the
German government and German society is very concerned about
these issues, and it’s capable of establishing rules and norms that could possibly harness the benefits that such technology provides without allowing authoritarianism to spread. There was a case where the
Uruguayan government bought a number of
security cameras from China. The US government said immediately that this would be a threat to democracy in Uruguay. I think things like the Arab
Spring, you could say the genocide in Myanmar, all of
these things may also happen without all this technology, so
I think the key is really to find rules
and norms, and I think in this particular case, the EU is
an inspiration to developing countries because it seeks to define a
middle way there and doesn’t think about these things in
purely geopolitical terms. >>Let me say that talking about
this, it just pops up in my mind – and, of course, there are differences – but in the UK at the moment, there is a fierce debate developing about facial recognition surveillance on the streets of London and elsewhere. A second point, talking about China: I mentioned this easy notion of the US innovating and China replicating, but the Chinese are far beyond that, and we have to be careful about it. And let me say that, in China,
to my knowledge, on the one hand, of course, it is all about the state and the party, the dominance of the party, yet at the moment you see a tremendous development in the educational system where, for the first time in Chinese history, students are indeed encouraged to ask questions of their tutors, and that shows how they really try to pick up the positive sides of the way in which we are developing technology, et cetera. Last but not least, I would say,
talking about this whole issue you raise, and you raise it in a
right fashion, I would actually return to my notion about Europe: of course we can’t do what the high-tech guys in Silicon Valley have been able to achieve, but we should not give up, and we should indeed do our utmost to use our research and development potential to see to it that the way in which we develop technology is really in balance with our values of a free and open society. And, for that matter, I’m still a Popperian from my academic career: open societies always win the game sooner or later. >>Thank you, Uri! [Applause].>>Let’s open the floor for two
questions, looking at our clock. I can see one hand there.>>I’m Minister of the
Presidency of Nicaragua. I have a takeaway that might
necessitate some more commentary from the panel. With regard to Nanjala and Oliver, I have a
conviction that this is beyond the Cold War. The NSA, and that Anglo-Saxon club, the Five Eyes intelligence club, are part of the game too.
They want all the intelligence they get. They control all the
communications they can. They control the bank transfers,
they enforce illegal sanctions with this information. A second takeaway I thought was
very telling on Oliver is the situation in Latin America now
where the Chinese are offering the Belt and Road and 5G, and the US has exhumed the skeleton of the Monroe Doctrine. That’s a no contest, and I think
that that was a very interesting thing. I think on a positive note,
although, as Nanjala mentions, the use of the internet to organise resistance
to the lack of privacy, or invasion of
the ability to have a free society,
is very real, that space can be occupied, too. If we look at the 16-year-olds who are on the streets demanding their future with regard to climate change, European politics is changing, and it’s just beginning to change, because these 16-year-olds have proven with the internet that they can organise better than the adults. And, in two years, they will be
voters. Theirs will be a generational
change which could be significant in politics around
the world. >>Thank you very much.
[Applause].>>Can we take a second
question, if there is a second question from the audience? >>I’m from the … National
Fund. Thank you very much for the last quote. I think also
that, in fact, we are looking all the time at the big players like China because it’s so easy to name them, but I think we have a transition in societies all over the world from hierarchy and dominant power games towards a networked way, and it’s so powerful: just here in Germany, a YouTuber changed the discussion in an interesting way. And I think, yes, Egypt had 50,000 young people with mobile phones against a military, and it doesn’t work well. We should focus more on the potential: how we can support this networking between people, how we can ensure that. I think Europe plays a very important role, and as for Israel and China, well, yes, they are there, but there is a huge number of people who really strive for new ways of participating in the dialogue on policies.
>>Thank you very much for the two questions. And I would like
to invite our panellists to react within 30
seconds each, if possible! Whoever would like to respond?
Uri, you can do it. >>On the second question, in
any case, when we talk internet, when we talk cyberspace, my position – and I’m fighting for that throughout the world at the moment with my global commission too – is that, when we talk about the internet, et cetera, we actually talk about what is called in diplomatic terms the 1.5 track, which means that it is not just a matter of states and intergovernmental kinds of negotiations, but civil society is there to play its role, and
civil society plus academia, plus the technicians, plus especially the NGOs have to play a very important role, and should not always start talking from the defensive side. They are a very powerful agent in this field, and that’s the way it should go. By now, my anxiety, actually, is that some countries which have been okay with this notion of multi-stakeholderism in the sphere of cyber tend to fall back again to purely state-to-state questions.
>>I want to bring another African country into the room
that is currently going through a lot of these things, and that
is Sudan, because I think it highlights a lot of the
complications that we are talking about in terms of power disparities, and in terms
of who gets to be in the room during these conversations. Right now, we are seeing in Sudan a transitional military council, which has nominally ended a 30-year regime, refusing to cede power to a civilian administration, and hundreds of thousands, if not millions, of Sudanese people across the country out in the streets every day resisting this decision. A lot of the
reason we know this is happening is because all of these young
people are on social media. The first thing that the former
regime did when the protest started is
they switched off the internet. Sudan started the year with an
internet shutdown. This should tell us that, number one, the
power to switch off the internet at any one time is an immense power that a lot of authoritarian regimes are prepared to exercise. That complicates the
power analysis about who gets to be in the room. What can civil
society do if the government is the ultimate decision-maker? But the second thing is that, again, people are innovators. People will find a way to circumvent. How do we protect whatever spaces are left in order for civil society, or opposition parties, to be able to continue to resist?
Finally, social change doesn’t happen with a logframe and targets and a two-year project cycle. People have been working
on democratic issues in Sudan for 30 years. People have been
in the streets. I have friends who have been arrested at least 15 times for refusing
to comply with the military administration in small daily
ways. This critical mass, the thing that is happening right
now is not happening because of social media, it’s one of the
tools that people who have been doing this work for a long time have
been using – are using in order to
amplify their cause and conversation. I go back to the
point I made that technology isn’t going to do anything. It’s
people who are going to do things, who are going to
determine what the space is going to look like in the next
five or ten years. [Applause].>>Thank you very much, Nanjala.>>A couple of those threads in those questions pick up on one of the comments from earlier. If we look at the question of geopolitics, it sort of implies looking at nation states and governments, but, in fact, some of the most powerful
actors on the geopolitical scene shaping these dynamics are
private-sector companies. Facebook in particular, with the
number of people on its platform, but as well as Google and YouTube, the
number of people using those platforms every single day, they
are shaping realities. They’re making decisions that govern and affect people’s lives every
single day. Not only, you know, relating to things like the genocide in Myanmar, but when Facebook has made changes at points in time to its news feed algorithm to prioritise different inputs differently, they’ve almost put out of business several newspapers that are the major newspapers in countries around the world. They are running real-life experiments on us, citizens, everywhere around the world. We
don’t have accountability there. We don’t have transparency
there. And I think this is one of the struggles, because I will say
that I’m happy to get into later some of the differences between
the US intelligence community and the Chinese
Ministry of State security – there are a lot of differences –
but the challenge for the United States is that while these companies are built and grown
out of US innovation, they are not controlled by the state. We do have a rule of law, and,
yes, we do need to be doing more to legislate and put guardrails
around this. At the end of the day, these are private
companies, and what does it mean to essentially have one
individual younger than I think probably most of us on this stage governing the
decisions about the information that citizens all around the
world see every single day? And I think that is an entirely
new question in geopolitics that we haven’t faced before.
>>Thank you very much. I won’t even attempt to sum up this discussion. I think it was multifaceted. I
just want to say that, because we are in this house, and a big
thank you to the President of Costa Rica and
Foreign Minister Maas, they sat through the whole discussion of
the panel. That underscores how important the subject is also
for leaders of governments, and for foreign policy
practitioners. I think what we know is we certainly can’t complain about a lack of analogue problems to deal with, and we know that we are entering a phase in which we are really
facing a very, very serious debate about
the digital dimension of the reality in which we have to
conduct foreign policy and security policy. I think this
panel has underscored it again, so, thank you very much, a big thank you to the panellists and the President. [Applause].>>A big thank you to State
Secretary Michaelis for hosting this first panel. A couple of
more minutes of your attention would be fantastic. I just
wanted to point out a few things to you. I wanted to let
you know that we have graphic recording of our sessions so
there is an artist backstage putting together a different
kind of documentation for you. We want to give you the full
half-hour break, so we would love to see you all back here at
11.40 for the next panel. We’re going to have a deep dive on trending technologies and their
effects on politics and democracy. At 11.30, our parallel programme starts on our stages on the same floor
here. You can find all of this information in your programme
booklet which you should have received at the entry but we
also have more copies for you here. Just as a very short
overview, we will be starting our geopolitical ideation labs, so it is possible for you to dig deeper into some of these topics in the other rooms and the reading courtyard of the library. There seems to be bustle already. I’ll let you get on to the break. Enjoy the parallel sessions and
see you back here at 11.40. [Break].
[Music].
>>Welcome back, ladies and gentlemen, here in the beautiful
Weltsaal. In case you just arrived, I’m Geraldine, your stage host here for today.
Before we begin with the next panel, a couple of housekeeping
remarks. As we pointed out at the end of the last session, our
parallel programme has started, so our geopolitical ideation labs are happening in the other rooms at the end of the corridor. As a reminder, we are filming the sessions in this room but not in the other rooms, in case you want to join one of the breakout sessions this afternoon. In case you were here for the last panel, and enjoyed Nanjala’s
intervention as much as I saw many people report on Twitter
that they did, I just wanted to point out that she gave an
amazing keynote at re:publica, so you can watch a full one-hour talk of hers by finding her on the re:publica YouTube channel. All of the content of re:publica is
freely made available as will be the videos of the sessions
happening here at the Weltsaal stage today. So, our next panel is going to
focus on disruptive technologies. It’s going to deep-dive into the
most important and disruptive technologies and discuss how we
can use them for greater public good rather than to dismantle
our human rights. The panel will be moderated by Coman, who is a technology reporter and news wire editor at DPA, where he covers breaking stories and runs the agency’s Berlin wire. He will be joined by Julia Kloiber, Caroline Sinders, Kaustubh Srikanth, and Reiko Kondo from the Ministry of Internal Affairs of Japan. Please give them all a big round
of applause. [Applause]. [Music].
>>According to experts, our world will change more in the
next ten years than it has in the past 50 years combined. But
what does it mean in concrete terms? What does it mean to us as
active citizens? What are our visions of possible
futures where technologies can make an impact on the world as
we know it and transform society in the not-too-distant future? What are the chances and
challenges against the background of foreign policy and
political participation? Disruptive technologies: chances and challenges.
>>Hello, everyone, and welcome to
today’s second panel session. My name is Coman, and today, we
will do a bit of future-gazing. So, with the four panellists we
have on stage, we’re going to try and tease out and discuss
some of the big areas of technology that will make major
changes to our lives in the next decade or so. Before we get down to discussing
the chances and challenges that this
development affords us, you’re going to get to hear four visions of how technology could develop, and perhaps also how it should develop, in the next ten years or so. Our first speaker is someone who
may have already influenced your opinion on open source and open
data, perhaps with a TED talk or a conference talk. Her work involves studying the impact of technology on society and helping foster tech for the common good. She is the founder of a feminist think-tank and also of Germany’s first fund for open-source innovation: Julia Kloiber. Good
to have you here. [Applause].>>The world is a complicated
place. This is not going to change any time soon. In the
last few days alone, we’ve seen a slew of security issues come
to light that affect WhatsApp, Intel processors, Cisco routers – that’s basically everybody in
this room in one way or another. The details are complicated and
beyond the scope of this event, but
headlines like these and dozens before them feed the nagging questions in the
back of our heads: can we trust our devices,
our phones, our printers, planes, our cars? Some would say
we can’t. Our phones spy on us, our social networks sell our
communications to the highest bidder, our politicians are
corrupt, our democratic institutions are inefficient, and our political
unions are bureaucratic. It’s easy to play the cynic and be a knowingly and begrudgingly compliant person in a system that is beyond our control. It’s easy to resign ourselves to a world headed for disaster. Of
course, your vacuum cleaner is selling your floor plans.
Everyone knows that. How can anyone be surprised? Our
inability to understand the systems around us leads us to distrust,
distrust in our technical infrastructure, as much as into
our democratic institutions or our scientific consensus. This mind set is toxic and
cowardly. It’s haunting our society. But instead of assuming that
everyone is out to get us, we have to generally believe in the
good in people and trust them to do the right thing. This is much harder than being a
cynic, of course, because it leaves us being open to being
taken advantage of. But it’s trust that lies at the heart of civilisation. Without it, we can’t form alliances or coalitions or standardisation bodies. We have
to be realistic, of course. As much as we have to trust others
will do the right thing, as much as we have to trust in our own
abilities to achieve what we set out to, we need to verify and crucially be able to do so
in the first place. We need to make sure that our phones are running the software they should be running, to probe our printers for security problems without being encumbered by DRM; we need to enable researchers to do their jobs if we want to find security issues in time; but we also need our governments to be accountable if we want to maintain our trust in these institutions. So what do we need? In short,
we need an open mind to collaborate with people, even
the ones we might disagree with. We need open data so that a wide variety of people has access to
crucial important information, this
promotes accountability, but also enables innovation and
levels the playing fields in areas such as artificial
intelligence. Finally, we need stable
open-source communities where individuals, governments, for-profits,
non-profits, can collaborate on the systems that underlie our
daily lives. So how do we get there? I want to give you three brief
concrete examples. The first one is a pretty
obvious one: funding. We need to publicly fund
initiatives that encourage more people to develop open source,
independent software. We need also to support the
maintenance of important open-source infrastructure. We
are so obsessed with innovation that we sometimes forget how
important maintenance is. And we have to fund research on technology and its impact on
society, its repercussions and its influence on society. We
need to make sure that our digital infrastructure that we
use on a daily basis is secure and independent. And there is
also lots of potential when it comes to funding and the use of
technology for the good or for the benefit of society, and
thinking of tools that increase
accessibility, citizen participation, help citizens to
participate better, or grant us access to information. We need
to publicly fund the development of these tools. One programme that tackles this
funding of open-source solutions here in Germany is the Prototype Fund that I
started in 2016 together with the Ministry
for Education and Research. Over the course of several years, we
are handing out about eight million to individuals and small
teams. This is something new for the
ministry, because usually they are handing out money to big
institutions and companies, but really to reach out to
individuals and small teams and support them
in their open source development. We have had over
500 applications in the first round alone, so that shows that there is high demand: there are a lot of people interested in developing open-source infrastructure and digital social innovation. Our portfolio, just to give you some
glimpse of what we are working on, stretches from civic tech – for example, air-quality sensing, or visualisations of EU subsidies – to infrastructure tools that promote diversity and inclusion. We also have security tools that make it easier to encrypt communications, and we focus on data literacy.
One tool, for example, we fund is a browser extension that lets
you see who is tracking you when and where. When I’m talking
about funding and open-source infrastructure, it’s of course
also important to keep the infrastructure secured that we
already have, and I want to mention a project that the European Commission started in 2015, called the Free and Open Source Software Audit project. The European Commission asked which open-source software it relied on, and listed it, so as to encourage security engineers to find bugs in this software. What else do we need? To keep an open mind, we need to
develop new tools that bring citizens together and help them to empathise with people they might otherwise disagree with. Here’s an example from the US. It’s a citizen participation
platform built to find consensus rather than division – an interactive crowd-sourced survey tool that can be used to generate maps of public opinion, helping citizens, governments, and legislators to discover the nuances of agreement or disagreement when it comes to public issues. Last but not least, we need a
solid basis for all of this. We need free and open access to
knowledge and information. We need strong laws and more
resources to make, for example, government data available to the
public, and better tools to not only access this
information but process it and make it readable for larger
audiences. But opening up information is
not only something that governments can do. Also,
companies can lead. Here’s an example from Mozilla
and their project Common Voice. This is an open-source speech
recognition model and the voice data set that was first
published in 2017, and why am I mentioning this? Because voice is a very
proprietary market. Amazon’s Alexa, Google – three companies, at least in the English-speaking world, dominate the market – and all of these data sets are closed, so if anyone wants to innovate in this field, it’s very hard. Mozilla opened up a crowd-sourced voice database. I’m going to stop right here.
There are many more examples; these are just three. And I want to add to what Minister Heiko Maas said today in his address. He said free and open
societies need a free and open internet. I believe that we have
to expand this. We also need access to
information and we need free and open tools so we can make use of
the world and use the digital world and the internet. Thank
you so much. [Applause].>>Kaustubh Srikanth was previously technology head at a not-for-profit that helps activists make better use of technology. Among his accolades as a technologist fighting for the public good, he helped to develop an online learning platform that offers digital security training courses for human rights activists. Great to have you here!>>As a human rights
technologist, I work on a wide range of topics; these include building digital security and privacy education for activists, and researching internet surveillance and censorship. When I received this invitation
to speak here, I asked the creators of the awesome programme what they wanted me to speak about specifically. After a bit of back and forth, I
decided to take it back to the basics and talk about something
I think about a lot, something very close to my heart. I sat down in my kitchen last
night and wrote some thoughts down. I hope that this manages
to convey the sentiment I wish to express here today. This is my utopia: an internet that is free for all and uncensored, a place where there is real choice for the user, where there is healthy competition for technology companies small and large to innovate, and where no-one feels coerced into compromising their beliefs because everyone else around them is telling them they have no choice but to do so. I imagine a future with a
federated open internet. Something we all use, email, is
one of the oldest federated and distributed technologies that is
still widely used today, and it still works – well, sort of. It doesn’t matter whether your
email is hosted by Google Mail, in a corporate data centre, or in a French basement: you can communicate with anyone in the world who has an email address. This is possible because email relies on open standards and is not a proprietary closed ecosystem that one corporation controls. I
would like to see this happen with every other technology or
service that we use to communicate, collaborate, and share information. I don’t want to
be forced to create a Google account to edit a Google
doc. I want to comment on a French Twitter post without
being forced into creating a Twitter account. In the past,
we’ve seen large corporations make some progress towards a federated internet, only to backtrack once they had a large enough user base, locking their users into their proprietary closed ecosystems. One example of this is when Google launched their first messenger
service Google Talk in 2005, they used
an open standard called Jabber (XMPP),
allowing non-Google users on other small service providers to
communicate freely with their peers who chose to use Google
Talk. They discontinued their support
for federation in 2014, forcing non-Google users to
create a Google account to communicate or find other ways
to keep talking to them. Where are we now? We are living in a present where
large corporate monopolies have massively concentrated power.
They are becoming gatekeepers of huge amounts of priceless data
that we produce, building algorithms to analyse this data,
recognise patterns and turn it into information that gives them
the power to influence our society
and politics. Misuse and breaches of this data
have already caused some havoc. We are starting to see the
negative impact of such information
silos, for example, low … . In reacting to the
problems brought on by these silos,
governments copy them by asking for backdoors
into their closed systems. The governments of the UK, US,
and India have tried to bully Facebook and WhatsApp into
introducing backdoors into their end-to-end encrypted platform.
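To make the stakes concrete, here is a deliberately toy sketch of the trust model – XOR with a shared one-time key stands in for a real cipher; actual end-to-end protocols such as Signal’s are far more involved – showing why an escrowed key and a backdoor are the same thing:

```python
import secrets

# Toy stand-in for a real cipher: XOR with a shared one-time key.
# The point is the trust model, not the cryptography.
def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

key = secrets.token_bytes(32)   # known only to the two endpoints
message = b"meet at noon"
ciphertext = encrypt(key, message)

# An end-to-end encrypted service relays only ciphertext; the provider
# cannot read it because it never holds the key.
assert decrypt(key, ciphertext) == message

# A "backdoor" means a third party also holds (or can demand) the key --
# at which point the conversation is no longer end-to-end encrypted.
escrowed_key = key
assert decrypt(escrowed_key, ciphertext) == message
```

The last two lines are the whole debate: end-to-end encryption is defined by who holds the key, so any mandated copy of it removes the property entirely.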
Earlier this year, the Indian government tried to mandate backdoors in encrypted platforms with proposed amendments to India’s Information Technology Act, which would
require internet companies to take down content deemed
inappropriate by authorities. Such backdoors would put the
privacy of millions of users at risk from governments that have
antagonised minorities and dissidents in the recent past,
and there are many more implications beyond government surveillance concerns. The aggregated data of billions of people has dystopian potential whose impacts we haven’t even seen yet. In The Age of Surveillance Capitalism,
“Surveillance capitalists asserted their right to invade
at will usurping individual rights in favour of unilateral
surveillance and the extraction of human experience for others’
profit.” How do we get to this good
future that I imagine? I’m a human rights technologist. People in my field,
technologists, advocates, they have been aware of these risks
for a long time now. But it often seems that the policy-makers aren’t listening, or do not make the space and time to understand the importance of this. But there is no reason why we
can’t sit down together in a room and talk and work on these
issues today. Social media giants like Facebook have
declared their willingness to work with policy-makers to address these flaws, while spending millions of dollars on lobbying to avoid oversight. I’m now addressing the
policy-makers in this room and saying this: this is up to you.
The window of opportunity for opting out of these systems is
closing fast. There may be a time in the near future when to
view pictures of your grandchildren, you have to be on
Facebook, to get an invitation to social events, you have to
have WhatsApp or Facebook. Your ability to discern between
levels of privacy and convenience will collapse, and you will no more be able to avoid handing over large quantities of personal data than you can move
into an isolated cabin in the woods. It doesn’t have to be
this way. We can change it, but policy-makers need to stop being reactive to
technological innovation and work together with us. The solution may be to break up these monopolies and force federated open standards on the internet, or to incentivise corporations to federate on open standards. In
conclusion, and to reiterate what I said in the beginning, we
deserve a future where the internet is not a source of angst, manipulation,
and oppression, but it’s instead
open, free-for-all, uncensored, and secure, in other words, a
future with a federated, distributed, open internet.
Thank you. [Applause].>>Thank you. Our next speaker
is one of the world’s few people, perhaps the only one, to be both an
expert in AI and machine learning, and also an artist in
that area. Her research focuses on AI,
ethics, and how patterns are used to create
trust in social media networks. I was lucky enough to see her on
the main stage at the re:publica event earlier this year, so I
know we are in for a fascinating talk right now.
Please welcome Caroline Sinders. [Applause].>>Hi, folks. Bear with me,
because I will pull my notes out. But I am Caroline Sinders. I’m
going to try to condense a handful of provocations
into a short time span. So, let’s see how it goes. But
I’ve been studying how people interact on the internet for
almost the past eight years, and this is the face I make
sometimes when I look at people on Reddit! I focused mainly on online
harassment and how harassment exists as I
would almost say as a design fail, and
how we’re looking at solving or
studying harassment in large-scale systems using things
like artificial intelligence and where policy and design actually
fit into that. I’m going to focus on design because I
started my career as a UX designer, and because I believe that design is an equalising action – design isn’t just a skill, it’s a practice and a language. What do I mean by design as
an equalising action? Let’s take an example. How many of you have ever
reported content on Facebook or Twitter
and labelled it as harassment? Some must have been harassed on
the internet? What is the reporting mechanism, actually?
When you report content, you have to select the kind of harassment that it is. In that interface, the system is
showing you how harassment is actually viewed from a policy
standpoint, by how you select what the harassment is.
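That selection step can be sketched as a simple mapping from report category to moderation queue (the category and queue names here are invented for illustration; real platform taxonomies differ):

```python
from enum import Enum

# Hypothetical categories and queues, invented for illustration;
# real platform taxonomies differ.
class HarassmentKind(Enum):
    TARGETED_SLUR = "targeted slur"
    THREAT_OF_VIOLENCE = "threat of violence"
    UNWANTED_SEXUAL_CONTENT = "unwanted sexual content"

# The report form's dropdown is this mapping made visible: each choice
# buckets the report into a queue the moderation backend can handle.
QUEUE_FOR = {
    HarassmentKind.TARGETED_SLUR: "hate-speech-review",
    HarassmentKind.THREAT_OF_VIOLENCE: "safety-escalation",
    HarassmentKind.UNWANTED_SEXUAL_CONTENT: "adult-content-review",
}

def route_report(kind: HarassmentKind) -> str:
    """Return the moderation queue a report of this kind is sent to."""
    return QUEUE_FOR[kind]

print(route_report(HarassmentKind.THREAT_OF_VIOLENCE))  # safety-escalation
```

The dropdown the user sees is, in effect, the keys of that dictionary: design surfacing both policy and technical capability at once.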
It is also showing you technically what the system can
do because it has to bucket that information to give it to
content moderators. The design in that space is showing a user
the policy and the technical capabilities of the platform,
even if they’re not aware of that. Now, harassment reporting
is not perfect at all, so this metaphor
is a little broken but it’s showing you through design the
policies and the technical capabilities of that system. I think we can use design in a
way to make AI more explainable, and this is what I’m working on
as a Fellow of the Mozilla Foundation. How do we make AI understandable for everyday consumers? And, again, I
think design is this space where we can start to create explanations and metaphors and
analogies inside of policy. I want to show you something I
made in 2016. This was originally printed in a US
publication at the time called Fusion, now called Splinter, a part of
the Gizmodo Media Group. It’s a diagram I made trying to break
down all the nuances of trolling. In 2016, we were in a very
different place in the world when talking about online
harassment. Inside of that place, a lot of that harassment
was excused by harassers as “I’m just trolling”. We know there’s a
difference between sending someone an inappropriate gif at work and threatening to rape
someone to death, right? When you call both of those things an
act of trolling, you flatten what those instances are. Trolling is an umbrella term. We
need to unpack trolling. I made a diagram after speaking to a lot
of different policy experts, thinking about how we start to add new nuance to this word. We have a matrix. I made this before the
US election and so it’s out of date, and post
the election, anything related to a world leader such as Trump,
sadly, who is my President, oh, my God, would have to go into
the harmful and serious category. This is one of the
things I’m trying to look at and create a metaphor
for is power within trolling. I used this to show that this one
word has many nuances to it and many actions under this word and
definition need to be thought of in different ways. Some things
are less serious and may be a joke. Should that be viewed or seen
under policy the same way as a rape threat? No. Also, in my work I like to try to
break down ideas into buckets as you’ve just seen. This is
something I made for a presentation for a bunch of
design students trying to explain what doxing is, which is
the release of personal documents and information about a user
released without their consent. Going back to 2014, there wasn’t
a lot of policy on major platforms
around doxing. There weren’t a lot of terms of service about anti-doxing. What is it? I tried to break it down into what I
thought were bite-sized chunks and how you could actually
explain doxing. This is where design is really important. You
take a large concept and try to unpack it. This is another
provocation I made in 2016 after spending
three years studying – a lot of the harassment took
place across many platforms, but it really galvanised on Twitter. Speaking to all kinds of different victims, from those that had almost no followers to those that had 300 followers at the time, I tried to redesign Twitter in a way that I thought
would actually create more safety on Twitter. So in this
interface, what we are looking at is the privacy settings
redesigned. What you’re seeing broken down here are the
different kinds of solutions Twitter could
design that could be more nuanced to protect a variety of
different users. One thing you see here is allow
followers or non-followers to tweet at you. On Twitter, you’re
either public or private. Privacy, or rather safety,
doesn’t exist in a binary. It is very nuanced. We’re not public
or private all the time. We have many different ways that we
engage in spaces. One of the things I found really
interesting, if you look at one of these last settings, it’s “make my account public but don’t let non-followers tweet at my handle” – that sounds like going private, doesn’t it?
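The point that safety is not a public/private binary can be sketched as a small settings model (the field names are hypothetical; real platform settings differ):

```python
from dataclasses import dataclass

# Hypothetical field names, invented to illustrate that visibility is a
# set of independent toggles rather than a single public/private switch.
@dataclass
class AccountVisibility:
    profile_public: bool = True           # anyone can view the account
    non_followers_can_reply: bool = True  # who may tweet at the handle
    appear_in_search: bool = True

# "Make my account public, but don't let non-followers tweet at me":
# public, yet shielded -- a state the binary model cannot express.
shielded = AccountVisibility(non_followers_can_reply=False)
print(shielded.profile_public, shielded.non_followers_can_reply)  # True False
```

Each extra independent toggle multiplies the number of in-between states a user can occupy, which is exactly the nuance a single public/private switch erases.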
One of the things that victims have told me is the emotional
negotiation they go through when they make their account private
because they are having to make an action they don’t necessarily
want to do but it’s the safest thing for them to do. And after reading a lot of
Gamergate threads on Reddit, they used people going private almost as a trophy: “She had to close her account down. We forced her off Twitter.” One of the most public things you can do on
Twitter is make yourself private when you’re a victim of online
harassment. It completely changes the interface, it
changes how people see you. There’s one small design thing that has this incredible impact on people’s emotional health and well-being. So this is why I re-laid this out. Going further, what I found fascinating about Twitter was
how unaware people were of how often they’re interacted with. How many of you have heard about
this weird kind of like inappropriate thing that
happened on Twitter? This woman, white woman going to
South Africa tweeted, “I’m going to
Africa, I hope I don’t get AIDS, oh, wait,
I’m white.” When she got off the plane, her tweet had gone viral
and she had lost her job and was facing insurmountable
amounts of harassment. She said she just didn’t know or
had no idea that this was going to happen, even though her
account was public. How could this happen? Well, the system of Twitter sort
of treated her as a noise, a bit of data that no-one interacted with
inside a noisy system. The feedback she was getting was that,
while she was public, she felt anonymous even though she
wasn’t. What I think this kind of shows is that a lot of people – oh, I didn’t change the slide, sorry. People are unaware of how
non-followers are actually interacting with their tweets,
right? They have a lack of awareness
even if they’re totally public and they have low interaction
that that tweet at any point can go viral. So one of the things I wanted to
add: can we start to show users data about how much their tweets are interacted with by non-followers? Going even further, building off
this other idea I had. What if we added more nuance to Twitter? What would it look like if we
could design conversations that can have levels of semi-privacy or being
semi-public. What kind of conversation are you designing? When we talk about semi-privacy, it
could be turning something like replies off. That can cut down
on a lot of harassment. You can select different users. That exists on Facebook with
Facebook lists until they removed it recently. Or reduce it to followers-only.
These things create different kinds of conversations and
groupings inside of Twitter. The reason I show this is
because I’m trying to make a case for how design is culpable
in the systems we create and the kinds of conversations we want
to have, and when we are talking about something like privacy,
how do we explain it to users? Design is really necessary in
showing people, and creating spaces for different kinds of
conversations. So because of that, I think we
can use design also in artificial intelligence. I think within that, we can use
design as a space where we are trying to create better kinds of
systems around artificial intelligence. So this is what
I’m working on with the Mozilla Foundation. It’s an idea or methodology
called Designing for Transparency. It’s inspired by the secure UX
checklist. My project outlines three
principles for designing for transparency
in machine learning: legibility, auditability, and interaction. Legibility is the ability to understand. It’s
writing something in plain language in your processes and
intentions that people can understand without a technical
background. But that’s just the first step of transparency. You
have to have auditability or the ability to audit. Your process
was so legible that someone can now have an opinion about it.
Not only do they have an opinion, they can try to offer or request
feedback. Once they have that feedback, that has to go
somewhere. You have to have a space for interaction or agency. This builds upon legibility and
auditability, that that feedback can go somewhere and people can
see that it is taken into account, or if the feedback is
rejected, they can see why it’s rejected. We see this a lot in
open-source projects. People have the ability to fork code, provide a comment, or volunteer for something. We need all
three of these things to make machine
learning feel more transparent. These are the bases of building
transparency. We also have to go further because we still need
more legibility. One of the things I’m also advocating in this talk, and I tend to advocate in writing, is: how do we make the data sets inside of algorithms and artificial intelligence more legible? How do we show or explain them? Should we have data
peer-reviewed inside of data models inside of artificial
intelligence? Data is almost like the DNA of an
algorithmic system. If you are to build a system that is
analysing social media data, or rather social media, what would
you need? You need a lot of social media data. If you’re to
build a chatbot that helps you purchase pizza, what would you
need? You need pizza data. You need e-commerce data about
ordering data. You need these things. Data is extremely
integral and important inside of artificial intelligence systems
but a lot of data sets are closed off and proprietary and
we have no idea what Facebook is using when they are offering
suggestions in your algorithmic timeline. What does their data model look
like? How old is it? Where does it come from? We need data broken down and
explained in a way that is human-readable
and human-understandable. Maybe we even need rating systems. And I bring this up because if
you look at food, we have our caloric information listed when
you purchase food. You have a breakdown of ingredients. And
that’s an expectation we should have. What we have is this ability to
audit our food. We should have something like that in
artificial intelligence. This is something I made for
Harvard Centre Privacy By Design, a recent publication
that’s come out. There’s a funny story, which I won’t get into, about
why I picked Spotify. One of the reasons I show it to you here is
because Spotify is a consumer product. No-one is going to die from my
shitty music on Spotify. And I – the reason I bring it up
is I listen to Spotify every single day, as a lot of people
do, but I have no way to intervene in this one aspect of
Spotify when they create algorithmic playlists, like Discover Weekly – another thing I listen to with a high amount of frequency. Why can’t I intervene in this algorithmic
space with a product that I use every single day? Why can’t I have any readable
way to understand how that product was made? Why can’t I
tell the tool what I want? I have no way as a consumer to
intervene in that kind of algorithmic system. And why?
It’s a consumer tool, right? It’s a consumer product. Why can’t I exist in it somehow? So, like writing code, I view
design as this necessary tool to help me interact with the
internet. And more importantly, I think
design is as political as code and policy. Thank you.
[Applause].>>Finally, all the way from
Japan, we have the director of the Office of the Director-General for Cybersecurity, Reiko Kondo, who also holds other technology-related positions there. Today, she will speak about her work with Japan’s cybersecurity research institute. Thank you for coming all this way to be here.
>>[Applause]. >>Good morning. Hi. I’m Reiko Kondo, from the Ministry of Internal Affairs and Communications. Firstly, I would like to talk
about emerging technologies such as IoT and 5G, with their positive
impacts on our social and economic
activities, and then discuss cyber security issues, so
actually, I’m in charge of cybersecurity, so I’m going to
talk about that today. I’m using several numbers today, so as you can see in this slide, these
technologies such as IoT and AI have been widely diffused and used in
a variety of industries. The number of IoT devices
amounted to about 27 billion in 2017, and is forecast to reach 40 billion in 2020. If we look at the market scale of AI, it has been rapidly expanding
at a compound annual growth rate expected to be 63.5 per cent between 2016 and
2025. And here is 5G. It plays a significant role as
the infrastructure of IoT and AI services, and it changes the
world. As you know, 5G realises very
high speed, like maximum transmission
of 10 gigabits per second. A two-hour movie can be
downloaded in three seconds. 5G has very low latency – about
one millisecond. With this low latency, for
instance, precise remote control of a robot in real time comes
into reality. Furthermore, many concurrent connections, such as million units per square
kilometre will be available via 5G networks. And frequencies were made available to 5G operators last month for implementing 5G networks next year for the Tokyo Olympic Games. The speed of wired networks grew about 1.5 million times in these 20 years. And the speed of wireless
networks grew about one million times in 40 years, reaching ten gigabits per second.
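As a back-of-the-envelope check of the rates quoted above (assuming a roughly 4 GB two-hour movie; actual file sizes vary widely):

```python
# 5G peak rate quoted above, and an assumed movie size of ~4 GB.
peak_rate_bps = 10e9        # 10 gigabits per second
movie_bytes = 4e9           # ~4 GB two-hour movie (assumption)

download_seconds = movie_bytes * 8 / peak_rate_bps
print(f"{download_seconds:.1f} s")  # 3.2 s -- consistent with "three seconds"
```

The factor of 8 converts bytes to bits, since network rates are quoted in bits per second.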
With the development of network infrastructure, internet
penetration rate reached more than 50 per cent globally in 2018, and it is
forecasted that global internet traffic
will reach 157,000 gigabits per second in
2022. If you look at the market value
of digital businesses, these
businesses occupied most of the global top-ten companies in the
year 2017. For example, it is forecasted
that digital economy will constitute
60 per cent of the global GDP in 2022. Actually, with the development and diffusion of the internet and IoT, a
vast amount of data which is called
Big Data, has been generated. By utilising and analysing Big
Data by AI technology, we can realise more convenient and
safer societies. For example, by combining traffic data and transport systems, we can
realise more comfortable driving and even
have automatic driving. In this way, with the development of big data, our social and
economic life is shifting to a data-driven society, where cyber-physical systems are developed. And this technology will gradually contribute to solving social and
economic issues. And a kind of negative aspect:
as this technology is developed widely all over the world, cyber attacks
targeting digital devices and networks have been rapidly
increasing. Our national laboratory, called NICT, has been observing global
cyber attacks in real time. It monitors a Darknet consisting
of about 300,000 unused IP addresses. Using a system called NICTER, it
analyses the attacks based on collected data. The number of attacks, cyber
attacks, observed last year was 22 billion. Among them, attacks targeting IoT devices were approximately half of the observed attacks. I would like to show you an actual example of a serious cyber
attacks caused by vulnerable IoT devices. In October 2016, the DNS server
in the United States was damaged by a large-scale DDoS attack
which was caused by a
large volume of network traffic
generated from over 100,000 IoT devices
infected with malware. Many leading internet services
and news sites using the DNS server
were seriously affected. Afterwards, it was reported that
most of the infected IoT devices used simple IDs and passwords like 1234 or 0000. In
response to this serious threat to IoT devices, our Ministry of Internal Affairs published a comprehensive package of IoT security measures in October
2017. The security measures consist of
five areas, and among them I would
like to talk about the measures on IoT device vulnerabilities. When dealing with the vulnerability
of IoT devices, the method of security
measures differ between IoT devices already in the market and new IoT devices entering the market in the future. Security measures can be taken
by using a certification system for
newly sold devices, having certain security measures or deploying IoT secure
gateway. However, these measures cannot
be taken for IoT devices already in the market. And considering the infamous IoT malware Mirai, which abused default passwords and hijacked devices, it is necessary to develop a system
to widely investigate the vulnerability of passwords for
IoT devices. In response to this situation, we amended the NICT Act last May. It allows NICT to scan IoT devices via the internet, identify devices with improper password settings such as 12345, and provide the IP addresses and information such as time stamps to the related internet service providers. Then the ISPs alert the users of the IoT devices and urge them to change their password
settings. This active scanning project is
called NOTICE, and we just started it
this February. I would like also to mention
about human-resource development, which is
quite important in counteracting the attacks. We have taken measures for the
2020 Olympic games. In 2017, the National Cyber
Training Centre was established for
developing human resources who have practical knowledge to
handle cyber attacks. We are conducting three types of
training courses. I’m sorry for the small characters, but we
have three. The first one is implemented for those who are in
charge of information systems in the ministries, and agencies,
local governments, and also critical infrastructure providers –
CYDER. A total of about 3,000 people
were trained. The second one is Cyber Colosseo
– a kind of good name – which aims to develop cybersecurity
for the Tokyo Olympic and Paralympic Games. It is
conducted for officials, including security personnel in organisations related to
these Tokyo Games. A total of 137 people were trained in this
exercise last year. And the third one is SecHack365, which targets young people and tries to cultivate young security innovators. It targets those aged under 25, and, in order to increase vital security researchers and entrepreneurs in IT, it provides a cybersecurity training programme with hands-on training and remote software training using NICT
research. Last year, we trained about 50
trainees. Finally, I would like to talk
about the information-sharing initiative
for counteracting cyberthreat, ISAC.
It’s an Information Sharing and Analysis Centre. It has been established for the industry for the purpose of collecting, analysing, and sharing information on cyber incidents from cyber attacks. It was renamed ICT-ISAC in 2016, having
more participants. Transportation ISAC is under
preparation. Since cyberspace is global, it
is quite important to strengthen
international co-operation for counteracting cyberattacks. So, actually, MIC has been initiating co-operation between ISACs, including ICT-ISAC, since 2016, holding a symposium in Tokyo on this. We would like to expand this
activity to other countries and we really believe that international co-operation
is crucial in counteracting cyber attacks. This is all my
presentation. Thank you very much. [Applause].>>Thank you very much for that,
Dr Kondo. Thank you for the great talks. You’ve given us a
lot to unpack. I have many questions about the
future. We will have time for questions
at the end of the session, so, if
anyone has a burning desire to ask something
specific, don’t hesitate to raise a hand. I’m sure we can
get a microphone over to you. To start us off, though,
we’ve just seen three of the world’s
biggest elections take place – the EU
this weekend, the Indian elections,
and in another populous country, Indonesia – over a billion
people have voted in the past month alone. Perhaps we can
start things from that perspective to begin with. i.e.
how the development of technology in the next decade
will shape how we cast our vote. Maybe you can help us get
started here, Kaustubh Srikanth, because
you’ve done a lot of research into India. Did technology play
a big part in the elections we’ve seen in India?
>>Absolutely, yes, not just the general elections where Modi won his second term but also the previous general elections when the BJP under Modi won in 2014. The BJP and Modi have probably
been the most organised in how technology and social media have
been used in election campaigning, and then later
within governments in India. During both of the last two general elections, they effectively used Facebook, Twitter, and WhatsApp to disseminate information which was, well, the BJP’s campaign propaganda. And you can’t talk about Indian elections and not talk about fake news. Modi actually has an app, an
app called Narendra Modi which is
one of the most downloaded apps on the Play Store and the App Store, giving you
information about what the government and the BJP
is up to. This is a Prime Minister who has
never done a public press conference in the last five
years that he’s been in power. Talking about fake news earlier,
I’ve been reading this – I just got my hands on it this morning – a foundation article where they look at various different research
reports that have been conducted on fake
news, and elections in India, and also in the EU, and it turns out more than a
quarter of content that is shared by the BJP, Modi’s party, is junk news. About a fifth of the content
shared by the Indian National Congress,
the largest opposition party, who were reduced to a minority in the last two general elections, is junk news. And this information
is being consumed and then shared indiscriminately
by consumers of this information because a large percentage of
new internet users in India are so new to social media, they
don’t understand the concept of what
bogus content is, and another thing
that it states is that, being the first to share things in
their circles, gives them a rush. So they just will share anything
that is being sent to them. This of course is how a lot of
this propaganda was spread during this election campaign
and the previous one.>>So to bring in maybe a bit of
a contrast from the EU, one of the
most recent Austrian presidential elections – you will remember this better than me, Julia – had to be delayed because they couldn’t get the envelopes to seal properly, whereas in India, you
have had to get 900 million people to
vote on time while also dealing with issues like WhatsApp
messages going viral, and causing outbreaks of sectarian
violence. Looking forward for the next ten years, do you think
there’s a lesson there for the rest of the world in terms of
how India’s dealt with the technology?
>>One of the things I’m not a big fan of is that 100 per cent of voting in India is done electronically. There’s been a debate about safety – a big debate going on right now about whether some of these EVMs were hacked into or not during the election. I won’t get into that, but as someone who lives in Germany with an Indian passport, unless I go to India, I can’t vote. There is a new bill that may allow proxy votes in the future, but there is talk about giving non-resident Indians the ability to vote online. That’s where things are heading in terms of technology and elections. I think also the Election
commission, which is this archaic body that
has been conducting free and fair
elections in India since 1950, they do an
awesome job. I actually have a lot of respect for the Election Commission of India
because they managed to get 900 million people to come out and vote over a period of six
general – general elections go pretty smooth. They’re starting to embrace
technology in ways that can be helpful and effective, but also a bit
horrifying in some ways, because over the last
few years, they took all the electoral rolls, which have always been public, digitised them, and put
them on the internet. The scary thing is you can go on
the internet, type my name in, and if you know where I’m
registered to vote, you will have access to a bunch of
demographic information about me, including my address. That
is the scary side of how technology is being used in
elections, but also at the same time, the
Election Commission in the last elections has been able to get
useful information out to voters about where they need to be and
how they need to go about making sure they can get their vote in.>>So what about the issue of
misinformation, then? Perhaps this is something where we can
bring in the rest of the speakers here. Because this has been a major one for all the elections, especially since the US one in 2016. We’ve seen some measured
response from social media platforms, not all has been
helpful. A really good example from Germany recently is of a Jewish newspaper being banned from Twitter because of a tweet that was mistaken for an attempt at hacking an election. Of course, it wasn’t trying to
do that, though. I’m curious how all of you see social media
playing out in that area? Do you think we will look back
at this struggle against misinformation in ten years’
time as something that was just a growing pain of social media
platforms, or is it really something that’s here to stay?
>>Well, we have had misinformation campaigns around for a long time; they just weren’t seen as political misinformation campaigns. If you look at harassment campaigns, they have the hallmarks of political election disinformation campaigns. They just weren’t called that – they were called harassment campaigns – and a lot of people’s warnings were ignored. If you look at “Your Slip Is Showing”, which predates Gamergate: in 2012, people on 4chan created fake accounts to infiltrate black feminist spaces and harass black feminists. A lot of women organised and tried to tell Twitter about these fake accounts proliferating and proliferating, and even collected all the accounts in a Storify group called Your Slip Is Showing. It’s an American black idiom that lets someone know you’re fronting or pretending – like your slip is peeking out from under your dress: you’re acting fancy, but you’re not. We saw this with the victims of Gamergate in 2014. These things we have had historically since 2012, and platforms have not responded to them. Now it’s at a much larger scale
and platforms are having a hard time responding to them. I don’t
think it’s a growing pain, I think this is a very real
historical – at least for the internet – problem, right? So I think a lot of this comes down to, honestly, how platforms are designed to look at content, the inadequacies they have around content moderation, and how they look at and respond to content that is not written in English. A lot of platforms really fail when harassment or misinformation or any kind of policy-breaking happens in a language that is not English. So I think this is a major, major problem. And platforms are incredibly slow right now at responding. >>In my work, I’m looking at
what information do we have and how can we share and spread this
information better? Like, people voting in Germany
might know that … is one of the most popular civic tech tools, where you can compare election programmes, and that is only possible because those programmes are available in machine-readable formats and because politicians are collaborating. What
other data is out there? I remember before the last
elections, we were working together with Google, because they said that in the US the best civic tech feature they had was the polling station finder. They would tell people where their nearest polling station was, and in Germany it was not possible to get the data, so it couldn’t be implemented. The argument was also that people know where to vote here in Germany, but then searches showed that this was one of the most prominent questions – people were looking up, “Where can I vote?” Talk about open data around elections: looking at the voting behaviour of a politician over the course of the year, analysing it, having access to this information, and working together with journalists to let others access it and make it publicly available – that’s a different angle. Looking at what is already out there, or at what information we just have to spread in more tangible, easier-to-read or easier-to-use ways. >>I mean, there have been a lot of suggestions of ways we can deal with the problem of misinformation. One
that is getting a lot of talk at the moment is this idea of the
fake name ban, something supported by the person who is
likely going to be the next President of the EU Commission,
Manfred Weber. I wonder what you think about using people’s real
names online? >>It’s a way to harass people. The reason I bring that up: does anyone know what a dead name is? A dead name is the legal name a trans person was given, but not their real name. A real-name policy enforces a legal name that may not be the name they actually use. So you expose so many more groups to harassment; or, if you’re using a pseudonym to protect your identity as an activist and you have to use your real name, it exposes you to different forms of harassment. There are more
nuanced ways to deal with bots, botnets, and the proliferation of information that don’t involve actually
opening up many, many marginalised groups to further
harassment. >>And also: what problem are we trying to fight? At re:publica a couple of weeks
ago, a politician was talking about her harassment case. She
mentioned a lot of the harassers used their real names.
>>If you look at a lot of the white supremacist and white nationalist accounts in the US, they use their real names. They’re particularly proud to be white nationalists, and they’re sharing harmful content and misinformation as well as targeted harassment campaigns. Many of the main harassers in Gamergate used their real names willingly, so a real-name law doesn’t do anything other than hurt those who need anonymity to exist.
>>I would like to make a quick comment. I would like to stress that the internet was developed as a free and open medium where freedom of information is generated. I would just say that sometimes anonymity might be useful for making accusations, but banning fake names does not solve the issues of disinformation, and I think the delivery of media is really important. We started a discussion about how to control this kind of misinformation – we started a study group – wondering whether we should have fact-checking, or ban fake names. We need to find a balance between openness and privacy. >>So I think the common thread
in a lot of your talks was a lack of trust that we are seeing
in various technologies, but particularly in social media. And there’s a need in there to
rebuild that trust. That’s after seeing so many data
privacy scandals, so many hacks. What do you think needs to
happen next in order for that to happen?
>>I think there’s so much to unpack there. For example, how are we defining trust? What is your definition of trust? In my research project at Harvard, we’re using a definition of trust from a psychologist from the 1960s – I’m blanking on the psychologist’s name, which is awful of me. But it defines trust almost as a transaction: an agreement between yourself and a platform, an entity, or a person that they are going to do something, and that they will abide by that contract. In the case of friendship, for example, trust may be that I am keeping a secret for you, and you are trusting me to keep that secret – the keeping of it is a contract, right? It’s you, an entity, and an action. It’s
not a passive relationship. So trust with a platform could
be I trust you to hold on to my data, or I trust you to tell me
truths. The truths could be like how you’re actually using my
data, how you respond to online harassment, how you
respond to content, and that you regularly deliver and show that
that contract is enforced and held, right? So, what is trust within a
platform? What is the trust that we have with Facebook? Well, the trust is broken if we look at how they talk about how they do things, right? They’re very opaque. That opaqueness degrades trust. Cambridge Analytica is a degradation of trust in how they handled consumers’ data. I don’t know whether trust should be rebuilt in platforms; instead of focusing on trust, we should talk about policy: how they create policy, how they adhere to policy, and whether that policy is replicated across geopolitical spheres. What is the policy they have in the US versus India? Is one treated or emphasised more or less than the other? That is not a conversation about trust; it’s a conversation about clear and understandable rules.
>>So if you run with the idea of policy, and I found the
example of WhatsApp in India really interesting, because that
was one that was meant for a set of users in India but
ended up having an impact around the world, so, would you all
agree, that looking forward, governments will be able to enforce policy locally for a set
of users that will ultimately impact globally? >>So, what I mentioned earlier
when I was doing my talk was: yes, the Indian government tried to change the IT rules of the Information Technology Act. They were proposed; they never became final, or at least they haven’t yet. What they were trying to say is that companies like WhatsApp, Facebook, and anybody else running a communications platform have to moderate content and give the government access to it, basically to fight fake news and hate speech, right? What that basically means is they have to build backdoors into encrypted communications platforms, and as soon as you build something like that into an end-to-end encrypted communications platform, it doesn’t affect just one region or one group in a region – you’re breaking the technology that is keeping people all around the world who use it safe. So, yes, one government changing policy and bullying a corporation into opening up its secure encrypted system basically breaks the system for everybody, everywhere. That is when you basically have to walk away and say no, this is not something we can use any more.>>We’re going to open this up
to the floor in a few minutes. I would like to invite you if you
have a question for anyone on stage. Before then, I would
still like to bring up the issue of AI. Do you feel like the leaders of
your governments should be making more decisions with the
help of AI? >>No! Not my President! Not the
United States.>>Really, though?
>>Yes, I mean, there’s a lot to unpack there. One, we can make a joke at Trump’s expense: do you think Donald Trump should be making any technology or policy decisions? Like, no! But, more importantly, as an
American, I think it’s extremely problematic that the majority of
major tech companies are based in the United States, and that
they’re making a lot of policy decisions that are shaped by US
policy. The United States is not the world. It’s not. I think – if a lot of major
decisions around how artificial intelligence is used in a
government, if it is coming from one country, and that country is
already setting standards around technology and regulation, like,
that isn’t a diverse set of
decision-making. More importantly, I guess, the decisions that are made in a government are also evocative and indicative of who is in power. And so, should the
Donald Trump brand of what artificial intelligence is and
its impacts on society, should that affect the next 20 years of
the ways in which we deal with artificial intelligence? No, there are many think-tanks
across the world outside of the United States that have lots of opinions on how AI impacts society negatively and positively. I don’t necessarily think that
the standard for how we deal with artificial intelligence
should come from one country and one administration.
>>I mean, it’s also: what do you mean by AI? Is it automated decision-making? Or is it scanning press
reports and press briefings? This is something that is
interesting to look at: what are the old structures, or the
structures that we are currently using, and do they have bias? Is the press briefing or the
briefing that someone gets, like who is the filter for that?
Where does this information come from? What can we learn from this for,
like, automated systems. Especially when it comes to sifting through information, of course artificial intelligence will be used, but when it comes to automating decision-making, there are some more steps that we have to think about. Who is developing
the technology? There’s been a report by ProPublica called Machine Bias. They looked at decision-making, or, how do you say – >>Predictive policing. >>Yes. They had a system that would score inmates on how likely they were to commit another crime. Judges would look at these numbers and these stats, and it turns out the system is heavily biased, because, of course, the data they used for it is data from the past, which has certain biases. I think there was also a court decision that this software had to be made open source, so that the departments using it could see what data is being used and what is feeding into the algorithm. >>There is also a case in New
Orleans, Louisiana, where I’m from, where they partnered with Palantir and used predictive policing software. A lot of civil servants were not aware of the arrangement, and the software was difficult to use; local civil servants and bureaucrats couldn’t understand why it was making the decisions it was making, nor could the technologists on hand. Even when something is
open-source, it’s hard for a civil servant to audit. It’s
hard to understand why it’s making those decisions. So, I think, building off Julia’s fantastic answer, when we talk about artificial intelligence we should really think about what we mean. Two, there are all these other problems in using technology like artificial intelligence in civil society. It is hard to audit; these are black boxes. It’s an emerging technology; we are still figuring out what the use cases for it are. If we are going to have a deeper conversation about where AI fits into society and government, are we preparing our civil servants as best we can to work with AI, or are we giving them another tool that’s uncomfortable and difficult to use and adds more friction to their day-to-day lives? Artificial intelligence is not a solution in itself – at times, it’s extremely user-unfriendly. >>So I agree with you, that AI
is a technology we need to control. Just recently, our
Cabinet Office published a report which says we need to establish an AI-ready society, where human beings understand that AI exists and where human beings are not controlled by AI. There are guidelines for developing AI, and we need to ensure transparency, accountability, controllability, things like that. We think that humans should be
at the centre of society.>>There is another common
thread, right? If technology is the answer, what is the question? This notion that we ask, “Should we use AI for that?”, puts the technology at the centre, and not the problem we are trying to figure out or solve – maybe we don’t even need technology for a lot of problems, or only very benign technology. This notion that there is this new technology and we should find fields to apply it to is counterproductive, I believe. >>So building off that, the
city of Amsterdam does use artificial intelligence in some
part of their processes, but it’s to help sort 311 calls – for when people are reporting that a tree has fallen and needs to get picked up. So artificial intelligence is
used to help just bucket phone calls,
and then a civil servant goes in and looks at how things were bucketed, but it’s used as a basic organisational tool for fielding requests. It was used to solve a problem, which was: we have a lot of requests, and we need to find a better way to organise them. The problem-solving there, I think, didn’t start with, “We have the provocation to use AI in our government – what do we do with it?” It started with, “We have this problem, which is hard to solve.”
>>Thank you all very much. I would love to keep this
conversation going about AI, but I wonder if there is a question from the audience to wrap us up?
FLOOR: [German spoken]. I have a question for Ms Kloiber. I have to say something about the fake news. A lot of people didn’t know where their polling station is, and supposedly couldn’t inform themselves. But you can get the information. The problem was that some letters were not delivered, but online you can get the information about where you can vote; the newspapers printed it. You can call your local office. You can also ask your neighbour. In Germany, the polling stations are where they always are.>>I wanted to say: how can you make this information more accessible, and also show it on different platforms? You know the number to call, or you can look at the website. But if the information is there, why don’t you open it up? Why don’t you have an interface where you give other people access, so they can put it in their platform and make accessibility even easier? That would also be a good idea; then you should ask the platforms why they don’t make the effort here. That would be an idea. >>I think the platforms, the
operators of the platforms did talk to the
government, but there are technical problems as well to
find a solution.>>Then I have a question. We need trust, and access for everybody to important information, and the internet should not be bad for democracy. We had a case with a YouTuber; we have to see where we can and cannot regulate this, and I see a danger here in politics for our free opinion,
democracy, and freedom of speech. If a party that has its own
channel for making opinions gets attacked, and then says, “Now we have to regulate the internet,” then I think that is a danger. How do you see that? Is the party system not old-fashioned? If they want to regulate the internet, is that not a bad sign for democracy? Don’t we need parties any more? Shouldn’t we think about other systems, then?>>I’m not sure that I think the party system is old-fashioned. That is a longer answer than I can give you in this framework right here. >>I am of your opinion. I think it’s crazy we have to
censor here, because people used the internet as a free exchange of opinion over YouTube, and they used data and facts about what has happened in the last years, and this led to more and more people getting interested in the elections – looking back at what the promises were, but also at what has been done and implemented in the last years: where are the gaps, where do the scientists say something different from the politicians? I don’t know.>>Thank you. So we need to create more
transparency.>>Thank you for those great
talks. I would love to keep asking questions. We’re out of
time now. Thank you very much. [Applause].>>Thank you for the wonderful
hosting of our panel. We do have a longer lunch break ahead, and
I’m guessing you’ll be around, in case somebody didn’t ask
their question here on stage, maybe they will have an
opportunity. In the next one and a half hours, ladies and
gentlemen, we wanted to leave you time for networking but we
would love to see you back here at 2.30 for our next panel. Let’s look at what the graphic artists
have recorded for this panel. Thank you very much. Let’s give
her a round of applause, please, ladies and gentlemen!
[Applause]. We will also be starting our
next round of parallel sessions at 1530, so in the middle of the panel you can decide how much you’re liking it, or have extra time for networking and hanging out and join one of the Insight talks we have planned for you at 1530, including talks by Markus Beckedahl and others. You can find further information in
your programme. Lunch will be served there, and through the
back, so I hope you enjoy the break and your networking time.
Thank you.
– Welcome back! [German spoken].
Before this panel begins, we are honoured to receive a welcome
address by Alicia Bárcena Ibarra, the Executive Secretary of the
Economic Commission for Latin America and the Caribbean. Please give her a big round of
applause! [Applause]. – Good afternoon to everyone.
It’s really a pleasure to be here, and what I really want to
do right now is to engage in some reflection about the situation, because I think that in today’s world, it behoves us to think about these things, and about questions such as whether democracy will survive the digital age or not. This is really the topic. I
believe we’re now facing a time of great social disquiet. A world which is marked and
characterised by uncertainties. We’ve come through ages of
change, and we’ve entered an age of profound
upheaval. We’re still at the interregnum
phase between what we thought were certainties before, and
what we will think about things in the future. On top of that,
we’re also facing changes of such an order of magnitude that they really have an impact
on our spatial-time perception, and I think digital will also be
taking us down that road. For example, it will have an impact throughout the upcoming century, and, on top of it, changes are taking place at an ever greater pace. Therefore, I say, there’s also a certain fear that
has taken hold of society as a whole. We’ve called these tectonic shifts; they are of that order of magnitude, and they include the technological revolution and the great migration that is taking place on a daily basis. Obviously,
we’ve always witnessed migration, but right now, we are
actually witnessing mass migration. People are getting on the move,
we have refugees, et cetera. And climate change is also drawing closer, and then we have this technological revolution that is changing us as we speak. We’ve called it hyperglobalisation. I will show this interesting graph to you. Hyperglobalisation has created a very, very high level of
concentration. We are talking about democracy
and digitalisation, but, actually, the digital technology
is in the hands of these companies. We’re talking about
seven companies on the whole. Seven companies that have a
market capitalisation which is even superior to the GDP of Brazil. Look at this graph. You can see the GDP of Latin America there: the seven most important digital companies have 5.2 trillion dollars, and the GDP of Latin America is just barely above that, at 5.4 trillion. This era of digitisation, is it really the
era of the citizen? Sorry, I think we need to step back a bit
and think about this. We are really facing concentration
here. What is really happening? We have a few companies and very few governments, for that matter – very, very few – that are
actually having access and really taking ownership of our
information. Illegally, they’re having access to our brains, our preferences, our daily lives – and if I say “illegally”, well, of course, we’ve given it to them: we accepted the terms and conditions. What does it mean to accept those conditions? It simply means that I’m giving these companies – because it is companies that are collecting the information, or the government, because they have the necessary capacity to process it – access to our lives. I’m giving them access to my life and my brain, and I’m giving them access to my preferences and predilections. What will they do
with that? Well, to tell you the truth, the political system will use it for electoral processes, for example. They will ask: what are the preferences of these electors? They will see how to align the campaign with them. And then the companies see us as consumers – somebody who says, “I prefer a bottle of Coca-Cola,” or, “I need an umbrella when it is raining” – so all of that is me. Who really is in charge? That’s the question. These companies and governments
that are recording our personal data will have an ever
increasing influence on this. As our colleague from Kenya said,
what can I do as a citizen? You’re dazed and bewildered. The global situation is
confusing, the actions are complex. For simple, normal human beings, it’s difficult to understand. I really look at
things that really are of concern to me – getting a
job, or what does decarbonisation mean? And then
we have these global issues that are about the common good, the
public good. We are losing sight of that. The interests of the majority are the interests of the citizenry, like, for
example, climate security. The environment, and also the
public good, public institutions. Here, you have public goods: you can go outside, there are parks, there’s a police force that takes care of people. In other places, you just go down into the street, and there is no public good. You simply get shot. I believe that we need to link up much more. All of this information is very,
very intimate. I am face to face with my
computer, with my cellphone, and what
really is taking place is that we are losing the bigger
picture. We need really to link up more
to build visions, and foment ideas
together. This is linked to an increasing
sense of disquiet and dissatisfaction
in society. People are enraged and angry, and dissatisfaction is taking hold. In Latin America, people believe that 60 per cent of government institutions are corrupt or do not merit support, so there’s a very, very low “tax morale”, as they call it, and people try to avoid paying taxes. To tell you the truth, technology is changing everything, including the way we’re thinking. In Latin America, we
are well positioned. When it comes to technological adoption and broadband technology, we’re
roughly at the 70 per cent range. The problem is not technological
penetration, in that sense. We do have internet. It’s just not of great quality – it’s still 3G, not 4G, much less 5G. We all consume through our mobile devices, but none of these devices is constructed or built in Latin America. It is patented in the United States, built in China, and we use it. We need to change, because people are starting to
feel uncertainty. Also, when it comes to the world of labour,
and Latin America, I have to tell you, that the labour world,
if we look exactly at who will be affected by the digital revolution, we see that it will affect disproportionately those who have a very basic education, those who are in sectors of low productivity. Right now, they don’t really care. They would
love to have a mobile device because they could go to the
bank with it, or use it as a bank, but when it comes to
artificial intelligence or anything else – nothing. For
example, the person who is in the marketplace with the market
stall, or somebody who is working out in the fields, they
don’t care. We really have to see those and look at those most
affected, and the ways in which companies and their
organisations are changing, and labour
relations are also being changed. But there’s also a very
sensitive aspect there. There are no slides any more. These
are really my final remarks here at this point. Something that is
really complicated, and difficult to express: the
future is being defined in our absence. We’re just thinking about daily things – the child we have to take care of, the dinners that have to be prepared – but the world as such is being defined without our presence, without our say. The only way we can counter that is by activating citizenship: organised citizenship has to come to the fore, and a concerned and enlightened citizenship needs to take charge. We look at these
promises. Obviously, it’s important for democracy to have
technology because it provides access to information for the
citizen, and it can contribute to systems of open governance. The agreement negotiated in Latin America provides a guarantee of access to information and to justice, and also protection for the activists, for example, who were working to defend the environment. All of that has been a huge effort, and it was only able to be achieved with the help of an active citizenship. If we do this, then we can work. What happens to our
region? Just like Kenya, we’re caught in the middle between two
juggernauts: between China and the United States. The decision between Huawei and iPhone – personally, I say, whichever one works better for me, I will take. That’s how people are thinking. People suddenly are waking up and saying: what? Huawei is Chinese! Most people in Latin America really ignored this fact until now. How can we
achieve a situation where we have a balance of power worldwide, so that it does not simply become a continuation of privileges, and so that we build a bridge towards a culture of equality? That can only happen through conversation, through the involvement of citizenship, and through a focus on the common good, the public good – trying to provide benefits to the majority. And the truth is that digital technology does not only affect democracy; multilateralism is affected too, and those are two sides of the same coin. And inequality is also in there
in the mix. So this is a kind of triangle
that you have. That’s how you have to see the way in which the majority interests are
affected, and that will also have an impact on minorities. It will have an impact on all of
us who are very connected. So I believe that we are right
now really at a juncture, in a world which is inundated with information, much of it irrelevant, really. What will make a difference? It is what humanity can bring to bear on the situation, which is clarity. Those who have clarity know which way to take, and that will translate into real power. Therefore, I would like simply to end by saying that technology is not neutral – it never has been – and we must achieve the right balance between its opportunities and ourselves. Why? In order to ensure that technology and politics are not just the art of the possible, but also the art of making possible tomorrow what seems impossible today. Thank you very much. [Applause]. – And, ladies and gentlemen, now
please welcome Jillian York and her
panellists on stage. [Applause]. – Thank you. I’m excited to have
this group of panellists up here to discuss the differences that
we are looking at in terms of the models in democratic countries and authoritarian ones.
– Machines will be able to predict our feelings and reactions through emotion AI. What does this mean for the state of democracy? Will models of mass surveillance set a precedent in authoritarian states? How will this affect our basic human rights? Will democracy survive the digital revolution?
– Great. Thank you. That provided a much better introduction than I was about to give anyhow. So, thank you. I’m
very happy to turn this over to my panellists who are each going
to speak for about ten minutes, and then we will have a
discussion amongst ourselves, and then welcome Q&A from the
audience. So we are going to start with
Marina Weisband. Marina brings a diverse background to this panel, having served as political director of the Pirate Party Germany, serving as project manager of a democracy project for schools, and she is also a radio columnist, sits on the scientific advisory board of Norhaus, and has written a book looking at how new democratic forms are utilising the internet. So, Marina, please go ahead.
– Thank you. This is a difficult position,
because it’s such a dystopic intro on such a big
question. When we talk about democratisation at the beginning
of a new age, which is the age of information, as opposed to
the industrial age, it’s always
difficult to figure out what the important things are. In the
beginning of the 19th century, people used to talk about
basically railroadisation, and asking questions like, “Are
trains going too fast? Is that healthy for the human body?” So when we talk about digitalisation, and we mean our phones or computers, that is
exactly what we are doing, and what I would rather be doing is
to look at power structures, and how these developments
influence whole sets of people, which is more difficult, because
we live in the present, and it’s difficult to analyse the present
from the present. What I see is basically a tug-of-war, and on one side of it, ironically, two players find themselves together: private companies and authoritarian states. They are on the same side because they both, for different reasons, have a big interest in centralised infrastructure and centralised data. On the other side is a global civil society. And civil society, and democracy as it is called here, basically has an interest in distributed data and distributed networks. So the main question boils down
to who do the infrastructure and
the platforms belong to? That is the question of who can
surveil us, who can access our data, who
can monitor us. It’s not a question for me whether surveillance is dangerous for democracy. Yes, democracy is definitely in danger. It is always in danger. It is most in danger when we believe it’s a natural state, as many do, in Germany. For many Germans, for many Europeans, surveillance is only an abstract thought: that someone could control you, that someone could influence you or your behaviour, or that you could be put in jail for what you think. It’s a sad reality in many other countries, but here we feel quite safe, to be honest, which is false. Surveillance not only changes
the way we act, it changes the very way we think and see each
other, and perceive each other. The fact that we are being
surveilled alone changes our behaviour, no
matter who is doing the surveilling. I wouldn’t draw so much of a distinction between authoritarian states and private corporations, because, at the end of the day, there’s a lot of interchange
there. States do get the data. If there is a big collection of
data, states do get it through legal measures, or through
illegal measures. We need to fight for possessing
the very infrastructure, to collectively own it, to build redundant networks,
and to use them. And the second thing we have to do is to get rid of this feeling of helplessness that we have as a society, which always comes up on these panels: we don’t know what to do, it’s all so big, and it’s bigger than me. Yes, but it’s not bigger than us. Where I see
the future, and I do see very good developments for democracy,
is in the ways technology enables
us and empowers us to have a voice. We are better educated and
better informed than every generation before us. We have
more power, and thus we have more responsibility. We have to use this
responsibility. That means we have to better educate each other, and we have to step
into a serious discussion about our responsibilities. There is a place I see for these
discussions, for this empowerment of the citizen, and
that is in the local community and in school, because
that’s where you get everybody. Universities are a super select group. The market is a super select group. But everyone is in school, at least in Europe. Everyone lives in a place. The
physical space is where we can meet people and where we can
learn that, if I change a thing, it will matter. I will see it
every day, and I will see this is what I change. This is my
power. This self-empowerment starts with students – which is what I do for a living: I help students shape their own school, I give them power so they can learn that they have a responsibility – or in the community: not as we used to do in Ukraine, where, when something about your fence was broken, you would call the president, but coming together as a group and changing
something. And I think in the future, nation states will lose their importance, and what will rise in importance are interconnected cities, because a city like Berlin has more to do with London and Tokyo than it has to do with, I don’t know, some little German town. We see young people discussing
politics, discussing ethics on a global
scale, through memes, through image boards. This is a
wonderful development, a development that has never been
here before. If we manage to come together in our little
spaces where we can see the things we change, if we talk to
our elders, to our kids, if we
discuss it not as “they’re getting all our data” – what is data anyway? It’s such a clumsy term – but as: they are getting our information. They are looking at us. They are surveilling us. What can we do about it? What can we change locally in our individual behaviour? What do we have to change politically, and how can we do that? I think GDPR showed a political
possibility of regulation if enough people come together and
try it. Basically, I’m all for: don’t be so pessimistic. Don’t think in utopias or dystopias. Ask: what can I do now, and what do I do for a big change? Thank you. [Applause]. – Thank you, Marina, we’re off
to a good start with a little bit of optimism on this panel.
Our second speaker is Marisa von Bülow, a professor of political science at the University of Brasília and a fellow of the German Institute of Global and Area Studies. Her research studies the impact of digitisation on public life.
– Thank you so much. It’s really a privilege to be part of this
amazing panel. So, we were presented with the
million-dollar question: will democracy survive the digital
revolution? Simple question! If I really had to answer it, I would say: probably not. Probably not in its current
form, but democracy is changing fast. The ways in which we debate
about politics are changing fast. The ways in which we take
decisions and the ways that we think about
representation are changing fast. So, I think perhaps the question is: what kind of democracy will survive? What kind of democracy do we want to survive? Before I talk more about this, let me give you a bit of context about where
I’m coming from. We have a research group in
Brazil that has been for the past eight years or so studying the impacts of new
technologies on activism. We have been studying protests,
and we have been studying elections in Chile, Brazil, and now in Argentina, which has elections next October. That is a bit unusual,
because usually social movement scholars study
protests, election scholars study elections, and by doing
both, however, we are able to see the double-sided nature of
digital activism which a lot of people have referred to here. How, for instance, it was able
to empower students fighting for public education in Chile,
which, in my opinion, was a really good thing, and, on the other hand, how it helped
to threaten the integrity of Brazilian elections in 2018, which was a
really negative thing. So we are especially concerned
about whether or not new technologies have been able to
fulfil their promise of levelling the playing field
for actors, or providing more equal opportunities which is of
course the great challenge of democracy in Latin America and
elsewhere in the world, as emphasised by the Costa Rican
President this morning, and by Alicia
Bárcena Ibarra just now. There’s no single answer to these questions about democracy and digital activism. Based on this research, based on where I’m coming from, I would argue that the problem is not technology per se; the problem is how we use technology, of course, and it is especially problematic when we, we the society, underestimate its potential impacts and ignore the new challenges it presents. In the ten minutes that I have – it’s now probably eight, or seven; anyway, let’s move on – I will mention two key challenges that
have come out of our research. The first one has to do with
false news. Six months before the presidential elections in Brazil, we ran a national survey, and we added a question about false news, which was this: do you
think you’ve been receiving false news about politics? Six months before the election
already it was very clear that this was going to be a huge
problem. False news was already an important phenomenon in Brazil and elsewhere, and, to our surprise, two in three of the respondents to the survey said
no, I don’t think so. I like to think of this as “The
winter is coming” graph. It was really a warning for us,
you know? Not only false news is going to
be important, but people are not really aware of it, and have a
hard time seeing the truth or understanding
whether they’re receiving the truth or not. And, indeed, six months later,
we had an electoral process that, as in
many other recent electoral
processes, were fraught with disinformation and tactics. The lack of awareness
about false news in itself is scary and it comes
with other things: lower levels of
satisfaction in democracy and less trust in democratic
institutions. Perhaps each is not so alarming on its own – who can really say they’re satisfied with democracy? – but when they come together, I argue that we are facing a dangerous loop: a loop in which lack of trust makes people great targets for disinformation, which in turn leads to lower levels of confidence in the political system, and consequently in
democracy. I want to talk about a second
challenge that also has come from our
research and that I think is very relevant to
put on the table, which is a tendency for societies to underestimate and misunderstand online political activities. I will argue that false news is a really important problem, and the lack of awareness about it is a really important problem, but it’s not all about manipulation.
That’s only part of the story. It’s only part of what is going
on here. But a lot of people have focused
on that, or have argued that this online
activism is not real activism because people don’t get out of
their sofas, and they’re just clicking away. And I think that has important
political consequences, thinking of online activism in that way. It’s not just an academic debate about what activism is and how it’s been changing; it’s also a much broader debate that involves the
whole of society, because if we can’t understand the new forms of online
activism, we can’t understand the political turmoil we’ve been
witnessing in many places, some of them in Brazil. I want to give an example of
this misunderstanding and underestimation of online
activism. This is a very simplified network of conversations on Twitter from, again, Brazil, in 2016. That’s two and a half years before the presidential election, in the context of the campaign to oust then-President Dilma Rousseff. The nodes are Twitter accounts, and the size of the nodes corresponds to the number of times messages from these accounts were retweeted, so their ability to influence the debates. This network shows that, two and a half years before the election, the most relevant nodes were accounts that were already campaigning openly for Bolsonaro.
It was seen as the work of robots or a bubble that would
quickly burst. But, in fact, what we argue is that President Bolsonaro’s campaign – and we have other examples of this around the world; it’s not a unique case – was extremely effective at mobilising cyborg networks, by which I mean it was successful at combining automated resources and human resources, so agency is a key factor here. They brought together automation
and a series of digital activists who got together
through digital media and they found an exciting way of voicing
their views. Every Brazilian had a relative
that was sending non-stop messages, usually through WhatsApp to everybody’s
phone in favour of Bolsonaro, and many of
these relatives did not have previous political activity, were not really activists in the past, but through social media
and new digital technologies, they found a way of voicing
their views. And we cannot ignore that
agency, and, but I think we haven’t really given it the
importance it had. – it has. So, to conclude, it is not only
about disinformation and manipulation
– these are of course very important and we have to be more
aware of them as I argued in the beginning – but the political changes we are
witnessing are also about agency and new forms of participation by new
actors who were in the past on the
sidelines of politics, through platforms that are not transparent, such as WhatsApp, as I just mentioned, and that in the Latin American context and elsewhere have been used to open the doors to more extreme ideas. Thank you very much. [Applause]. – Thank you. Perfect timing! Our next speaker is Xiao Qiang, the founder and editor-in-chief of China Digital Times.
His research focuses on state censorship and control of the
internet and public opinions on Chinese
social media. – Thank you. It’s an honour to
be here. And I was here in the audience for the previous panels, like all of you, and I heard the word
“China, China, China” quite a few times. Here are the Chinese!
[Laughter]. [Applause]. Don’t clap too early! I’ve not been back to China since 1989. I was born in China. I came to the United States to study physics as a graduate student. I went back to China 30 years ago. 30 years ago today. We’re talking about democracy, about whether democracy can survive. I want to talk to all of you about democracy coming to China. This, 30 years ago. This, 30 years ago. And this. My story starts here. And I have been a human rights activist ever since. I was in Geneva speaking at the Commission on Human Rights. The Chinese delegate called me a professional liar, making up lies about China. After that conference, I wandered to Cologne Cathedral, and there was a person who asked me to write something on a board, so I wrote a little poem whose relevance I will tell you later. But this is what I really want to tell you: even though democracy was crushed by tanks and machine-guns 30 years ago, we had a new hope, which was the internet. Freedom of information, freedom of speech, would connect people and empower their voices. We all knew that digital promise in the early days, and so did the Chinese people. Some of my friends went back to
China and started companies building infrastructure, with the hope that this time we would open up China. But the Chinese government knows
that too. And they will not let
information get out of their control. And since then, I’ve been studying and following China’s internet development, and the result is what I know as the Chinternet: China’s internet behind the Great Firewall. The Great Firewall has an official name – you can Google it – and an official website, but it doesn’t tell you what it really does, which is filtering and monitoring the information traffic between China and the rest of the world. Also, within China, there are hundreds of thousands, you could even say millions, of internet police trying to regulate the internet. However – however – Chinese netizens are robust and innovative, using all kinds of ways to get around the censorship. The internet has been a contested space for the last 20 years, and this image is from a magazine, but it is based on a research report by my group documenting censorship on the Chinese internet. However, until 2013, I was still
optimistic. I was quoted in this Wall Street
Journal article: “We do think the internet is opening up China after all.” But then the second episode came. President Xi Jinping stepped into power, and Apple moved all its data centres in China to be run by a Chinese company. Facebook, dying to get access to the Chinese market, went out of its way shamelessly to appease the Chinese dictator. Including Amazon. Including Google. They’re still wrestling with the
Chinese authorities. The traffic of Chinese people searching Google in simplified Chinese characters is approximately equal to the number of people using VPNs. From 2015 to 2017, and until now,
the firewall has been really powerful. It’s not only the firewall: they even developed the Great Cannon. I won’t go into technical details about this. I haven’t got to my real speech yet. Is this what we were waiting for? We are seeing the tech boom in China, the largest internet market. You can get all the numbers to
say how powerful Chinese internet technology and numbers
are. I could give you one number after another; mobile payments, for example, dwarf those of the United States. I could keep going. And we all know that today the largest internet companies are concentrated in two places: California and China. This is the third episode,
unfortunately. We didn’t anticipate this. The technology has now turned in favour of the ones who control the internet, who access the data, who can manipulate the data, who can surveil the entire population. And that is a dictator’s dream technology: facial recognition, the social credit system – you name it. Here, we talk about big
data. We’re talking about the surveillance cameras. We all
know this story. I’m going to go very fast, but I particularly want to highlight one place: Xinjiang. This is a police database screenshot, categorising people by their social relations and the data it aggregates. This is Xinjiang. This is the official propaganda about Xinjiang. This is the reality of Xinjiang, and this is the reality of Xinjiang. This is a protest outside of China. This is what is called the integrated police operations system database. I will not go into the details, but you can quickly read on screen what kind of data goes into such systems to monitor Chinese citizens,
and also establish the dominance of the Communist Party’s control. And here is a list of Chinese companies that are building those surveillance infrastructures. And here, European researchers independently found a database with access to millions and millions of Uyghurs and their facial recognition data. And we are coming to the
question: what has that got to do with the world? Well, dictators never stop with
domestic repression. They expand. And they become imperial. This is the reality we are
facing today, talking about international relations. Why did
I tell you this? Because Chinese companies are already everywhere. Through the One Belt One Road initiative, they are established in those countries. But I don’t want to end simply as a pessimist. This is the traffic on China Digital Times: really small, but growing, growing, growing over the years until now, despite the Chinese Great Firewall blocking it. There are people seeking such information inside China on a daily basis, in real numbers. On a daily basis, there are
millions of Chinese netizens using proxies
to get around the Great Firewall. I searched the Chinese internet, and I found this blog. Why? Because it posted my little poem, the one I had written on a board in Cologne Cathedral. A Chinese military officer, visiting as a tourist, had taken a picture of it, and it touched him. Many years later, he wrote a blog post. I found my poem again on the
Chinese internet. The hope is still there. This is Mahatma Gandhi: “When I despair, I remember that all through history the way of truth and love has always won. There have been tyrants and murderers, and, for a time, they can seem invincible, but in the end, they always fall. Think of it. Always.” Thank you. [Applause]. – Thank you. I’m going to have
so many questions for you because I know you had to skip
through some of your slides quite quickly. I want to welcome
our last speaker. Hito Steyerl is an artist and new media professor at the UdK, working across art, philosophy, and politics, and exploring late capitalism’s social, cultural, and financial imaginaries.
– Thank you. Good afternoon. It’s already been explained how AI is being used by governments today, and I would
like to focus on how AI predicts the future, and I would like to
do so by giving you some examples, because, you know, we
heard that AI is able to predict the future, but how does
it do it? We made an experiment, and I
want to show you how this works, because this is literally the
future. This is a campfire which is future-predicted 0.04 seconds into the future using artificial intelligence or, more precisely, machine learning, or neural networks. So this is literally a
documentary image of the future made using artificial intelligence, and at first glance you can already see two things. First of all, you don’t really
see a lot. That’s the first takeaway. The second is the fire starts
getting out of control very, very
quickly. So, why is this, and how is
this? And what is the matter with
predicting this kind of future? And interestingly, I showed this
to my daughter who is 14, and she already knows, you know,
that I have been training these kinds of networks for a long time, and she told me this is very understandable. AI is like fire, so it’s not a
coincidence that AI is able to predict fire very well, and what
she meant is that obviously you need a specific type of footage to train a neural
network to predict the future, and this kind of footage is very
well suited to it, but I realised immediately that she
may have made a much more important point
because I think all of you know, regardless of where you come from, that fire in mythology is sort of representative of technology, right? Fire is the first symbol for technology. It’s the technology that is stolen by humans from the immortals and then used or abused, and so on. But fire very literally is an important technology for humankind, maybe the most important one, because fire is the thing that, by way of cooking, literally made human beings into what they are today by developing their brains, and so on, and so on. Fire is said to have been very
important in developing skills like language, social
communication; it helped humans survive colder environments. It helped humans to migrate, to figure out ways to shape the environment, and so on, and so on. What then is the link between
fire and AI, and I’m not going to tell
you that AI will be as important as fire in the future. Many
people think so, and many people have made a career of pretending
this to be the case, but I will tell you literally: we don’t know, right? AI is emerging right now. We have only a vague idea of how
bad it’s going to get – let’s put it like that. So we don’t
know what AI is going to look like in the future. But I think
there is one thing that seems to provide a clear parallel between fire and AI, because
anthropologists, when they study what fire produced for humankind,
they talk about something they call “time
colonisation” which means that the fire opened up the night for
human activity. Before, you couldn’t use the night because it was dark, and there weren’t too many things that you could do, but then, with fire, you could sit around the campfire, tell stories, and so on, and so on. They call this time
colonisation, and I think that specifically with big-data-based predictions, also, you know, in terms of machine learning, there is another type of time
colonisation really happening, because the prediction of the future is based on past training
data, right? You can only use past data
because future ones have not happened yet, so basically, the past is
colonising the future through this kind of prediction, and
it’s not only the past that is colonising the future, but
literally whoever owns the data from the
past is colonising the future. So the question is not really about what kind of future, but about who owns the future, and that question is clearly answered in the next slide. I think it’s candid to very openly tell us who owns the future.
It seems that Mark Zuckerberg is the sole owner of the future,
and, in a way, it’s also true, because, you know, the databases that Facebook and other corporations own are colonising the future by way of this kind of
databased future prediction. But this kind of machine-learning-based prediction also – and it is engineers who are saying this, not me – very often turns into divination. A Google engineer put it clearly in one of his presentations, stating that machine learning has become alchemy. Why’s that? Because even if prediction via artificial intelligence produces results, the researchers and engineers don’t really know why. The relation between the input and output is unclear. This is called the black box problem. Some of these AI-based predictions are
divination. Divination has a long history; one form, called pyromancy, had people trying to predict the future by looking at the flames. I think something quite similar is going on in many cases today
when we look at this kind of database divination and whether it works.
Let’s see how this is actually produced. This is the tool that I used to produce this kind of prediction, and I
call this my political pyromancer because
this is the tool that basically anyone can use to predict the future via
artificial intelligence. And I have helpfully renamed the parameters, which you can change in order to predict your own future, and
you can see that basically, this machine is not just an input tool in which
basically data are inputted which give some kind of output,
but the parameters are kind of very, very crucial in
determining the outcome. First of all, you have of course
to ignite the fire in order to start the prediction, and, if
you uncheck this box, the prediction starts to happen. And then of course, according to
the parameters, the fire will change
quite a lot. For example, if you check this box, which I dubbed “the hate box”, it will probably accelerate a lot. The interesting thing about this
tool is that it is not real time, right? It is running in
slow motion, so basically, you have to record the outcome to predict the future, so the predicted future is already in the past when you’re able to see it. But, as you can see, those
factors impact the outcome a lot. So this is a racial bias factor, which for German purposes is labelled the [German spoken]. If you crank this up, then I think it’s like Valhalla: the fire is going to basically engulf everything. This 0.14 factor is a funny factor –
or not funny, actually. It’s an existing factor which is used in the Austrian Job Centre
system. It predicts the chances for
either males or females to find new employment after they were
dismissed, and the chances of women are apparently
0.14 times worse, so this factor is applied to calculations of whether the Job Centre should fund their retraining, and so on. I think I
made the point: don’t click the depression button, or every single thing will just break down. And a lot of the buttons also do nothing. I think that’s very common in
AI, that there are a lot of buttons which seem very, very
important but, in fact, they don’t do anything whatsoever. Okay, so this is how to predict the political future via pyromancy or AI. And to cut a long story short, because I’ve been invited here as an artist: the traditional role of culture is to tell you
these kinds of stories, right? We’re doing storytelling. We’re
doing fiction. We’re doing mumbo-jumbo. We’re doing a very
good job at it. If you want mumbo-jumbo, just
call me – we’ve got this covered! You should not really
trust Mark Zuckerberg when he tries to sell you mumbo-jumbo as
facts, and I’m not only talking about Mark Zuckerberg: you’ve heard some very serious and real fact-based examples of how this kind of mumbo-jumbo is not only being used to predict a fictional future but also to control, suppress, and surveil populations. Thank you. – Thank you. Where do we start?
I think the first thing I would like to ask all of the panellists is this: we heard from the presentation before this that the future has been defined in our absence, and she emphasised the role of clarity. I’m curious to hear from any of you, and you all have different things to say on this, what sort of clarity you would like to see brought to the issues we are talking about today: to surveillance, to AI, to censorship and control? – First of all, I would like to
have the clarity that this is on us. We
tell ourselves the story that there are powers out there who are
mighty, and who do surveillance, they do
data capitalism, and it’s difficult, and it’s terrible, but there are very
concrete political actions that we can take, so the first
clarity that I would like to establish in this room is first
of all learn about the concrete
political steps that we can take, which I would love to talk about later, and second, perceive that you are
now an educator. Whoever you are and whatever profession yours
might be, in the new age, we are all educators, and we have a responsibility, which I think
the other panellists used wonderfully to teach us all – I learned a lot – and we all need to be these teachers.
– No-one else wants to jump in? – Sure. This issue about the future being defined without us: well, one reaction to a lot of the negative impacts of digital technologies is turning our backs on them, so a lot of people have cancelled their accounts on this platform or that. There is a political scientist who used to talk about exit and voice. This would be the exit way. I totally understand it. I have done it myself, but I don’t think it’s the most productive
way, and I think there are many, many examples of ways in which people have been
incorporating digital technologies, and I mentioned very quickly the
movement where we have many, many examples of people using
either the platforms that are there, or building new
platforms, and building new places in which to debate in better ways, with other kinds of rules, about politics, and so I think
voice instead of exit would be the clarity. – We now see that technology is power, and this AI-empowered surveillance and big-data analysis, this power to create databases, is a new kind of power: whoever can access all our personal behavioural information can turn it to some other purpose. A political system will use it for its own purposes, to manipulate, for example, elections. But what about in authoritarian countries, where it is being used to further rule their population and prolong the dictatorship? So the question is: will democracy survive? Democracy will always survive. It survived the Second World
War, right? In this country, the Nazis threatened democracy around the world, but
today, this country is a democracy. Democracy survived the Cold War,
right? The communist Soviet Union, with its nuclear power, threatened the existence of humankind. And democracy will survive
again, this time facing the new challenge. But it is true that the answer relies on democratic
countries, like in Europe, like in the United States, in Japan,
Australia, everywhere, Brazil, to defend your own democracy
facing those digital age challenges, to fix all the
threats, but at the same time collectively resisting the expansion of Chinese digital totalitarianism. Surveillance capitalism fused with a dictatorship is what I call digital totalitarianism. Remember, this is not a clash of
civilisations. Chinese people have the same desire for freedom and dreams as everyone
else, as the people who value freedom in the rest of the world. But remember 30 years ago, when there was a moment of freedom, you could listen to the real voice of the Chinese people: we are the same. When we collectively, globally, resist China’s rising digital totalitarianism, it actually supports the struggle inside of China, and eventually freedom will prevail. Thank you. – I made the case that the
future has been colonised, and it has been colonised by past data but also
by those who own those data, and I think that’s a crucial point. These data monopolies have to be
broken up. This is not just an abstract claim. Let me give you
an example which connects to some of what my panellists have
said. Let’s talk about the data sets
used to train artificial intelligence on
neural networks to enable face recognition. So this is a very funny episode
that two of my colleagues researched. There is a data set called “MS Celeb”, the Microsoft Celebrity database, which consists of 100,000 photos of so-called celebrities. My picture is in
there as well. – Mine too!
– Congratulations! So your picture was also used to
train some entities in a university in
China in order to enable the facial recognition of ethnic
minorities in some parts of the country, so, as you
said in the beginning, abuse of power and surveillance by the totalitarian governments is one thing, but they’re being assisted by the corporations,
the data which they’ve privatised which they’ve
expropriated and stolen from people like me and you, and
probably other people in the room too, to enable surveillance
and oppression. – Absolutely. I’m so glad that
you mentioned that, and that you’re bringing it to corporations, because, Xiao, your presentation was so clear, and it was clear to see the threat coming from China and its corporations, but what I see here in many cases is democracy ceding power to corporations in how they regulate speech. What role do you think these corporations play, and what should we do in terms of some of
the issues that you’ve raised? – I agree that it is a key
topic. Again, Alicia Ibarra was talking
about Latin America and making the case about having a voice, but, of course,
the great obstacle to having a voice is that we don’t really
control the infrastructure and we don’t really control the
production of content. So it’s a huge challenge. I
don’t really have the answer to it, again, the million-dollar
question. I think as was said this
morning, it has to be a multi-lateral debate
that includes states and civil society. I think civil society has a key
role to play, and Southern civil society, we have specific challenges, for instance. It’s hard for us to know what was the impact of digital technologies on the last
presidential elections. Not only do we not have access
to the data, but even in some instances when we could have
access to the data, we did not have the resources to buy those
data sets and analyse them. So, I think that the global
south needs to be included, and the specific challenges we face,
because, at this point, it is hard even to answer this basic question of what the impact on us has been, because of the challenges in terms of resources that we face. It’s not a simple question.
– I would love to hear your thoughts as well particularly in
respect to the ways in which American companies or European
companies are working with China, Xiao? – This is a well-known story
that the American companies are first of all seeking China as a
vast cheap-labour place, without labour unions, a labour movement or environmental protections, with government-controlled land and no private property. Therefore, when the authoritarian state focuses on its economic development, they can make big projects happen very soon, and the big corporations access that kind of environment in the market, right? They leveraged the advantage of being a totalitarian state. Most of the wealth has not been distributed to the
migrant workers who are producing almost all the goods sold everywhere around the world. They still get very low pay. But the state gets rich, and the Chinese domestic policing cost has already surpassed the military cost. That’s what it takes in an
authoritarian state to keep control of its own population.
It’s also a mistake to think that the Chinese corporations, any major strategic Chinese “private” enterprises like Huawei, are really private. I give you one example. The entire population uses WeChat. The internet police are using the same screen, sitting in the same space; there’s no difference whatsoever. Now, this is what we are facing,
and foreign companies such as Google and Facebook still, from a money-making point of view, understandably, desperately need access to that market. So the power of one company, even if they are so powerful, is no match for the Chinese state backing those Chinese tech giants. So, if we don’t come up with some kind of collective answer between governments and the private companies to respond to this, the Chinese state behind those so-called private companies, or tech giants, will keep on expanding, taking advantage of the open societies that give them so much access. And then even on the most
cutting-edge level of artificial intelligence, Google, Amazon and Facebook are even starting to lose their competitive edge to the Chinese companies.
Look at the facial recognition technology. Voice recognition
technology. Biometrics: in all these artificial intelligence implementations for social control, in particular, China is leading the world. And those are coming out. So there is an overall strategic consideration that needs to be there. It’s not just about which company wants to take advantage to make money; it’s not about which country is trying to make its own trade in this global context. What we see is that two systems have different political values,
and the different political values embedded in those
political systems now both are empowered and threatened by the
same technology. So that is the situation we are
facing. Don’t just ask is democracy
going to survive? Can democracy survive? Right now, technology
is going into the hands of the controllers. That is the
challenge we are facing. – Thank you. I want, since we
are talking, we’ve moved to facial recognition. I want to
bring this back to you. I’m sure you saw the news last
week, last week or two weeks ago, that
San Francisco has banned the use of
facial-recognition technology. You can bring it to other
subjects as well, if that is the democratic measure, the extreme
measure that we might need in the case of some of these
technologies? – Well, I mean, definitely yes,
I think regulating a lot of these technologies will make a
difference. On the other hand, going back to the question of the panel, will democracy survive the digital revolution? Maybe, who knows? That’s possible. It may survive digitalisation
but the thing it will not survive is automation. This is
one thing that AI is producing more and more within societies.
It’s trying to automate governance. I mean, the Social Credit system
in China is one example of automated governance. I think
this is absolutely the contrary by definition of what democracy
is. Democracy isn’t only about making decisions in common; it’s the process of discussing and arriving at the decision that actually is democracy, and, if you take that away by automation, then it
is not democracy any more. And I think this is a temptation
which is somehow present in the European debate. Oh, my God, the
others are so advanced, and artificial intelligence, maybe
we should do it as well, and so on, so that the temptation, you know, to go
down the road of automation is present, and it should be
resisted because this is the absolute opposite of what
democracy is about. [Applause]. – I see you going “Hmm, okay,”
which brings me to my last question before we open up the
floor to questions. You’re welcome to comment on that as
well of course. I wanted to come back because you had mentioned
that you also had thoughts on the political steps that we should be taking. >>Yes, absolutely. I do half agree with you that we
can never automate governance. We can’t automate human
decision-making. What we can however automate is
monotonous heavy work, which I hope we do, because I’m not a fan of that. I think it’s time for us to stop
with anxiety looking at the big corporations and think what are
they going to do? But, instead, start building. Start building
now. Because one of the main problems is that the only big platforms on
which we as a civil society connect and
discuss democracy are profit-driven. And we need to build our
platforms that aren’t, and it’s not that difficult to do per se. For instance, I built a platform called aula that connects young people and lets them have a democratic debate and empowers them. It’s not per se difficult, and it’s publicly funded, so it’s not
profit-driven. We can do publicly funded. We can do
community-owned platforms. The question is: how do we get
people to use them? How do we make them the main
infrastructure? Because, I don’t trust
for-profit platforms. If I search on YouTube for
cabinets where there’s been right-wing riots, after that, I find a lot of
right-wing extremist videos, because they get the most
clicks. It’s not because YouTube is evil, it’s because it’s
profit-driven so that is the logic it functions in. How do we
get people not to go to the place where all their friends
are? That’s why internet builds
monopolies, right? We use what everyone uses. None of us uses DuckDuckGo
because it’s better than privacy for Google. The political thing we can do
here is called interoperability. We can
force Facebook and whatever other big platforms there are to
have standardised protocols that allow them to speak to other platforms, so that I don’t have to be a member of Google to talk to people there, and to talk to people on Facebook, I don’t have to be a member of Facebook. And they can read my posts without being a member of my platform. So, basically, everyone keeps
their own data and the different platforms can speak together through open
protocols. But that is not something that
big corporations would do on their own terms. We need to
force them, and we need to force them politically, and GDPR has shown that Europe has some power to force big corporations to do things they don’t necessarily like. And imagine if it is not only Europe, if Latin
America joined, if other countries, continents joined? We
are the customers, after all. We pay the money, so, I believe
we can change it through political action. We just need to keep democracy alive to elect the people who will do that.>>Thank you.
– I see that it is time … [Applause]. – Thank you to all of the
panellists. We’ve still got time for questions. It’s time to do
that. Is Geraldine here? There she is!
Excellent! – Hello, my name is Carl. I have
a question. I would like to ask Ms Weisband about the new
democratic forms. Excuse me if I pronounce it in the wrong way, but Mr Qiang said that democracies will
always find their ways to come back, like after the Nazis. I think that is a good approach. Because it is good to say that
democracies will always find their way, but in the digital
revolution, I ask myself what strategies do we have? We can’t expect the Allies to come marching in and save us again. We need to think about
strategies, certain strategies for new democratic forms, and how democracy can survive is an interesting approach to think about, because we should think about this. We have strategies against everything, right? Against populism, against whatever. But where do we develop strategies for new democratic forms that can survive in this digital revolution? – What strategies can we use in
the digital age, most of all, how
can we use digital means to further democracies? And people have been shouting
for a very long while – I came out ten
years ago with this – that we we need
digital means of participation in political process, that we need more
direct democracy, not – more direct
democracy, and a more democratic European
Union. These are the things. It’s not – we don’t change the branding of democracy. It’s not
something entirely new. You can build a lot on the very old
principles of democracy. It’s just that we keep electing
parties who try to ignore the whole
digital change. That’s the problem. – I work for the Kofi Annan
foundation. Before he passed away last year, he – political
leaders, civil society, a part of it, trying to find answers on
how to safeguard our elections in the digital age. The
Commission is going to release its finding in January. I
thought this is of interest to anyone in the room, please come
and see me if you want to learn more. My question is there is a
movement that says that owning your own data should be considered a human right. I would like to bring that to
the panel. What do you think? – We’re getting a lot of good
questions, and hard ones! So I’m not even going to try to
answer that. However, data privacy and access
to data in the context of elections has been a
huge issue in Latin America. We saw it in Brazil. We are seeing
it and monitoring the upcoming elections in Argentina as well, and the case of false news that I mentioned also applies to data privacy. So, people are really not aware
of the extent to which this is a huge
problem that affects the elections. So, specifically in
Brazil, but I also think in India, and maybe in
Kenya, maybe in Africa as well, but,
anyway, in cases where WhatsApp has been a really key app used for campaigning purposes, phone-number databases have become key sources of information, so it became common and almost naturalised to receive a message from a number, often a foreign number, from someone who was obviously not in your contact list, making propaganda, usually against some other candidate rather than in favour of a candidate. We had huge problems of illegal databases of telephone numbers,
but also residents’ addresses, age and gender being used illegally in the
context of the Brazilian elections, and, quite frankly,
the Brazilian authorities did not know how to react to this.
It was a very tough learning process and it was too little too late.
We were not prepared, and when I say “we”, it’s not only the state and public authorities, the electoral authorities; civil society was not prepared either to react to what happened in Brazil in terms of the illegal use of databases,
and, yes, I’m very much looking forward to your report, please.
Send it to me. – I don’t have a well-thought-out answer to your question, but I thank you for raising it, which
is whether we consider the owning of our own data as a
new set of human rights. But let’s go back to where the human
rights started, right? The fundamental principle of it is to treat human beings as autonomous and with dignity, meaning we treat each other as equals, as other members of the human family. Now, the surveillance technology and AI power is a new kind of power: under that power’s lens, everybody’s behaviour is set up as data, and that data can be manipulated, can be calculated, can be sorted for someone else’s interests. And that, even without our knowledge, without our recognition or permission, even on a subconscious level; think of how the technology can recognise your facial expression. That kind of powerful technology does bring potential abuses to human agency and our autonomy. It’s threatening us. Therefore, to guard that space, we do have to consider expanding rights; otherwise, the consequence is that we’re losing our dignity and agency.
– I think we have time for about one more question. Do we have
one more? – I would like to answer in
German. I have a concrete question about China: lately we have seen an increase of laws concerning data protection in China, the cybersecurity law and others, and, from our point of view, there is a contradiction with, for example, surveillance technology. In Germany, we understand data protection always as both a public-law and a private-law thing. In China, it seems different: data protection is there, mostly transferred to some companies, which then have data protection duties when they work with others. Is this a contradiction, or how does China see this contradiction? – The government
has different sections, and different political factions have their own agendas. When you hear from Chinese officials that there is a new law on data protection, these are all true. Some of them are just trying to move forward, to be more like a modern society; some of them are just going through the motions. But remember this: law in China
right now in the dictatorship under the
authoritarian regime means rule by law, it doesn’t mean rule of
law. Xi Jinping can rewrite the
constitution just like that and a few thousand delegates say
yes. This is rule by law in China. Under that, control of
the population is the top one priority. Protect privacy may be at a
certain degree to the society, sometimes when it is necessary,
not coming to the state security, regime security.
That’s the worst. The regime security that Xi
Jinping uses. I give you an example, say a
Chinese anti-virus company; most Chinese PCs have its security software. That anti-virus software, sitting on 95 per cent of Chinese PCs, can read everything on your computer: what software is installed, what information is streaming in and out. On the back-end, in the server space of that company, who is
watching those data? The company, of course. And the
state. Together. There’s no data privacy at that
level whatsoever. No matter how many data
protection and private protection laws the Chinese
government passes for its citizens, they’re not giving up any bit of control over their population.
– Amazing, you know? We’re focusing on China, and rightfully so, but there may be a right for people to own their own data, their own pictures, and so on. But I would also like to claim the right for my own artwork not to be used as decoration by the German Foreign Ministry to recruit a delegation to board a plane to Beijing to try to sell German arms there, no? This is also a situation which has actually happened, and I don’t want my artwork to be used for these kinds of purposes either. [Applause]. – I would like to thank all the
panel. Thank you for joining me today and sharing your thoughts.
[Applause]. – Thank you so much Jillian for
being here today and doing a wonderful job moderating this
panel. – [Applause].
– So, we have another break coming up. I know not all
of you got to ask your questions, there were more
raised hands and we ran out of time, but
perhaps our panellists would be so kind to answer one other
question during the 30-minute coffee break we have coming up
now. I would love to see you back here at 4.30 for our last session on new philosophies. At 5.30 we will start our parallel sessions, our Ideation Labs on
new philosophy in the other room – a lab on work, money, trust,
and on governments. The last one held by one of the artists
contributing to the exhibition, so it’s a tough choice where to
go for the last session, but I hope either way you enjoy
yourselves and looking forward to seeing you back here at 4.30.
[Applause]. [Break].
– Welcome back, ladies and
gentlemen. I hope you agree with me that this day has gone by
incredibly fast and we’ve already reached the last session
here on our stage, our ultimate panel on new philosophies. At 5.30 we will start our Ideation Labs in the other rooms alongside this as well. We hope you will stick around
after the show ends here for a drink with us to round off the rest of the evening, and enjoy a bit more
time to speak with one another. So, we have discussed so far on
the stage how digitisation is changing our political systems,
our institutions, and our processes. But digitisation is
also changing humanity, and in some ways, redefining the boundaries of what it means
to be human. Topics such as robot-human
relations, cyborg rights and designer
babies aren’t science fiction any longer. The last panel today
will discuss the effects and perspectives on these topics,
and it is led by Michelle Christensen, who is an interdisciplinary researcher at the Weizenbaum Institute here
in Berlin, joined by Maurizio Ferraris, Moon Ribas, and
Isabella Hermann. Please give them all a big round of
applause. [Applause]. – [Audio]: What does it mean to be human in the face of fast-paced complexity and change? How is the human transforming against the background of artificial intelligence? Will we one day leave ethical decisions to neural networks? Are new ideologies and world views needed to counterbalance technologically-driven upheaval? What are the concepts for new paradigms and possible futures in a world where humans and machines define the world they co-inhabit? New philosophies and paradigm shifts in a machine-intelligent world. – So, very excited to be here to moderate this panel on new
positions of the human with maybe the most
fun panel I’ve ever moderated – a
philosopher, a political scientist, and a
cyborg activist. They will speak for ten minutes each and
afterwards we will have a discussion on stage and invite the audience to discuss with us and ask their questions. We will start with Maurizio Ferraris, who will give his ten minutes first, and who is a Professor of Philosophy at the University of Turin. He is President of LabOnt, the Laboratory for Ontology, and a columnist for La Repubblica. He has published more than 60 books in many languages, and right
now, he’s working towards the
realisation of an Institute for Advanced Studies at the
University of Turin. And the Institute is dedicated
to Umberto Eco. – Thank you very much. I was dizzy because I was not seeing my technical devices appear, and without them, I’m without resources. So it’s an evident case of technical dependence, and, in a sense, I need no other words to speak about the connection between humans, professors, and machines. But, as for my subject, I decided to
speak, because we heard a lot of
worries and talks about the problem of privacy. All our data are now in the possession of companies, and who knows what they do, and there is digital control, and so on and so forth. Now, the most important problem
for me is not the control or the
surveillance, also, because, often, it’s not a
real surveillance because it’s without punishment. Suppose that someone buys a Kalashnikov on Amazon. Amazon doesn’t go to the police saying, “Hey, he bought a Kalashnikov”; it says whoever buys this item usually buys pistols, and bombs, and so on, and so forth! So, it’s a peculiar way of surveillance. It’s not a Foucauldian way. What
is more important in my eyes is the fact that we are working on the web more
than in previous times. It’s on this point that I want
to focus my talk. This is the typical claim on
Amazon, again. I’m not an anti-Amazonian, but a typical example is the fact that there is an exploitation of people at Amazon, and after years and years of similar news, we then have the news that Amazon found a new machine that can avoid having employees that package the goods, so there are no more slaves, at least at Amazon. Of course, there were
protestations against the slaves at Amazon, and now there are protestations against the fact that there are no more slaves at Amazon. But the most important thing is
that this is in general the destiny
of humanity with the web, because
the web allows a kind of automation that
can succeed in a perfect way. I mean, Alexa is not spying on what we say. It’s ridiculous: someone is at home, his wife and children never pay attention to what he says; the only one who does is Alexa. But even this is not for spying reasons; it is in order to implement software that can achieve a complete automation. This creates something that is
philosophically interesting. As you know, Keynes said, about a century ago, that in our century 15 hours of work per week would suffice for any person. We don’t need to labour more than this. But, seemingly, something
different happened. We have the feeling of working 15 hours per day. How so? This is the philosophical matter that I wonder about. And at the same time, as you know, in 1995 an influential book appeared whose title is The End of Work, and about 20 years later a book appeared whose title was The End of Sleep. How so again? An end of work and an end of sleep at
the same time? So there is a big philosophical,
political, and economical question. Why are we tired if the work is
disappearing? In fact, it’s not exactly the work that is disappearing, but the fatigue. We have the evidence. Now people run, we see a lot of
people running. 50 years ago, sorry, it wasn’t
so. Why? Because they were doing work
that was sufficient in order to burn
calories. Now it’s no more the case, and people run. But this does not mean the work disappears; in a sense, we have the revelation of the real nature of labour. Look at this picture. We have two animals. There is no special gender-driven idea in putting a lady; I could put myself there. Imagine myself. – Let’s dive into the narratives
that are established in those fictional stories.
– Those narratives don’t present us with a positive future; a positive future is rather the exception. – The Asimov story is
anthropomorphism which means that AI is presented as
human-like. You see in the corner, this is
the film version of I Rowe bot by Asimov. This is the robot which is
really human like. And you have films where the
robots are really played by humans. If you think a little
the about that, it’s super ridiculous. Why would a robot be
played by a human? This shows us again that it’s
really about humans and human dramas in stories for humans. And then we have another strand
which is artificial general
intelligence, AGI, which is about AI rising, about
singularity, about conscious systems, and this is rather HAL 9000 from 2001: A Space Odyssey. It all ends up with a war or a
fight. Humans against machines. And this is also reflected in
the real discourse, so just remember
those guys on the slide. Now we have again Sonny, but he’s the template for an AI system, so an AI system is epitomised by a human-like robot. We have Elon Musk building on Terminator fears, and then The Guardian illustrating those Terminator fears with the Terminator. Now, I guess I mentioned Terminator five times – I did my job! And we have, which I like best, Kissinger’s article “How the Enlightenment Ends”, and HAL 9000 mixed up with Masonic signs, so there must be a conspiracy of AI systems going on there. But those simplified narratives actually distract from the fact that it is not about humans versus
machines, but it’s about social power structures. It’s about humans in society and
in political systems, it is about
politics. So, AI can be used to establish
surveillance states that fulfil a certain ideology. AI systems can be used to
establish or already strengthen a system of digital capitalism
where the social good is no longer defined by democracies at best, but by platform, social media, or internet companies, and with the use of AI, people get scored and discriminated against, so we all know these kinds of data, algorithmic, or automation biases, and this is relevant. So, I would say this is not
relevant, but this is relevant. To end this, I would say there’s
no divide between humans and machines, and there are no
human-like machines, and there is, in fact, no inherently
ethical, just, good, or whatever AI; behind all technology, even AI, there are people who create it, building it according to their world views. In fact, it’s our task to see that this development of AI is on us, so we as society, we as politicians, activists, whatsoever, we decide on the course the technology might take, so it’s not outside of society. Thank you. [Applause]. – Thank you all three very much. In terms of whether we are seeing a new
human condition, a new status of the human: we have the human as data, as capital, how maybe we became the batteries of the apparatus that we built ourselves, and I liked the full-time consumers, if we can get back to that. We have the human as an extended trans-species, evolving free from the limitations of consensus biology in a sense, and we have humanist humans, if I understood Isabella right. Is that maybe in the end
correct? If you have to answer the
question, are we witnessing a new human condition, you would
actually say no? We just like to tell ourselves stories about humans and machines. – Machines are there to assist
or enhance humans. It’s not that they take the place of humans.
This is actually the main argument, or the main thesis. – So are we changing position in
any way? Are we evolving with that, do you think? Or still
playing power games with our toys?
– At least you can see that in the discourse, so, if you kind of talk about it, there are humans against machines, so: will machines be so creative that we don’t need poets any more, that we don’t need artists any more? And this kind of discourse doesn’t lead us anywhere. So it’s something that kind of
is within society. I mean, actually, we’re not that far
away, I guess. It’s something that happens
within society, and it has an effect on
power structures within society, but I
would say that this whole human versus machines is a distraction
from what’s really going on, and I would say that also some kind of Silicon Valley pioneers, academics, artists, they really play with this because they really want to distract from the fact that there are always humans behind it. This is kind of a power
game, you know? But, in fact, it’s always on
humans, and humans playing those power
games.>>Maybe we’re asking the wrong
questions, because isle going to bring that back to the guy who had the
lab for Ontology. I come from a design background and I would
say we always evolved with technologies. Our brains evolved, our senses
evolved, physically we evolved, and so on. So we always designed
things, and then those things always designed us back as well, so hence also the changing
ontologies. Would you say that we are witnessing a new human
condition, that we are moving into becoming something
different than what we have from the ideals
from the enlightenment and humanism, or is it more that we
are framing that as a narrative right now? – I would say that there are two
main views on the relationship between humans and technology. One, which is a very widespread view, says that technology is an alienation of humans: we are naturally humans, and then enters something different, which is technology – bad, a kind of gift, useful but also dangerous – which transforms us, leads us to think and do things that we don't naturally do, and so on and so forth. I have another idea: that technology is basically a revelation of what humans are. Technology is part of the ontology, if you want to speak about ontology, of humans. Think about the enigma that the Sphinx poses: what is this kind of animal that goes in the morning on four legs, at noon on two legs, and in the evening on three legs? This definition implies that technology is part of the human definition, because only a human is the kind of animal that needs a technical supplement. I am full of technical supplements – for instance, with these, I am a living cyborg – and it seems that since the very beginning we are all human cyborgs, because if we were in the savannah with lions running behind us, we could not hold a conference on the future of humanity, or develop languages, culture, and so on. This is all part of technology. So, in a sense, the human is since the beginning a posthuman, in the sense that it is a post-animal. Animals are usually well formed: they have their milieu, and they know what to do, and therefore they need less technology. Since we are in Germany, I will quote the famous dictum of Nietzsche, who says that the human is the animal that is not yet stabilised. Aristotle says that humans are naturally speaking animals – and the mobile phone shows this – naturally social animals – and social networks show this. That's all.
– In the end, is there a fundamental change happening, or is it just continuous evolution?
– As usual, I made a long discourse to say: it's all the same. [Applause].
>>Let's bring it to the self-proclaimed trans species, very nice.
– I wanted to ask you, from your perspective, about cyborg rights, since you're also an activist for cyborg rights. If we take this as an evolution, as we've always continued to evolve, as you also argue, and now we are using technologies not just as a tool but also as an organ, as you said, can you talk a little bit about cyborg rights? It is interesting, from the images that you showed: when you implant things, you are doing it in a medically safe way, right? So you're not DIY body-hacking, you're doing it with medical help?
– That's the thing, not really. It's not illegal – how do you say it in English? I guess it's a-legal. There are no laws against it. There is no big part of society doing it, and there are no big laws about it, which is why it's like this. Actually, what I was commenting on was my friend. My implant is just a cut under the skin, so it's okay, you can do it in a very small place, but he actually had to have surgery, because he has an antenna implanted in his skull. I don't know if you know him – he has this antenna implanted in his skull that turns colour into sound, so he can hear colour. And, actually, for me, this is what it is like to be a trans species, because you have a new sense, or a new organ. In his case, he has an antenna, and antennae belong more to the animal world, so he has this new body part that belongs to the animal world, and that's why we like
transforming the human species a bit – that's how we feel. And, actually, when he went to a hospital and asked a doctor if the antenna could be implanted in his skull, they said you have to talk to the bioethical committee, and then he talked to them, and, in the end, they said no. Actually, they gave him three reasons: one, that it wasn't necessary; two, that it was dangerous; three, that they were worried about the image of someone walking in as a normal guy and coming out with an antenna. Then, actually, we found that there's a parallel with the transgender community in the 1920s and the 1930s. If someone had the body of a man but wanted to be a woman, because the identity didn't reflect the body they had, and they wanted to have surgery, the hospital said the same things: that it wasn't necessary, that it could be dangerous, that they were worried about the image of going in as a man and coming out as a woman. I hope in the future it will be normal to have new senses and body parts, and that people accept the use of technology not just for medical reasons but
also to experiment and design your own perception. And the cyborg rights – okay, we defend cyborg rights, like the right to create your own senses and your own organs. Also, yes, for now, maybe, the surgeries that we do are done by people like nurses – actually, the first ones I did were done by body hackers, people who modify the body. I mean, many people have more extreme things in their bodies. I feel like I'm more of a mind hacker than a body hacker. My aim is to modify my perceptions and my mind; in order to do that, I use the body. The aim is not to modify my body. I also think that you can identify yourself as a cyborg even if you don't have technology in your body. I have a friend who doesn't have any technology in his body, but he says everyone is a cyborg, because we have the Hubble, the satellite going around the Earth, so we have a third eye in space. That's why I think it's an identity – and even if you have technology in your body, you may not feel you're a cyborg.
– It becomes very interesting:
there are real political and legal questions when it comes to these things, exactly as with the transgender community, as well as the relationships.
– Yes, I didn't talk about the rights, either; I was getting distracted. We think that if, as a society, we unite with technology more and more, then we will need rights to protect the people who unite technology with themselves. Actually, we presented some cyborg rights some years ago, and we thought that especially if you have internet in your body, one of the dangers is that you can be hacked, so, for example, there is a right protecting who enters your body or not, so that you cannot be hacked. Also, for example, if someone pulls the antenna, it is considered a physical aggression, not just damage to an object. Here in Berlin, there is this person who actually has a cochlear implant, and he wants to hear beyond the human range, and he can't do it because the implant belongs to a company, not to him. So these rights say that if something is in your body, then it should belong to you, not to a company – and there is another one I don't remember.
don’t remember.>>It’s an interesting idea of
bringing it back to the discussion of going beyond
humanist ideas or not, since that is what these things come
down to. If it is still protected by the
basic ideas and the Universal Declaration of Human Rights, or if you need a
universal declaration of trans human rights in the future. We
will see when it gets brought back to those political
questions. So one of the questions also in this briefing
is where will – I say maybe should – we leave ethical
decisions to neural networks in the future? Isabella says we
shouldn’t. Is that correct? Because, actually, that’s anyway
always humans behind the system. Since there’s increasingly of
course machines which are learning and knowing, and following, you worked a lot
with ethics and AI, maybe you can say something about also,
because it relates maybe to this bodies opening up, bodies can be
hacked. We have that right now with
digital pacemakers, you can hack hearts, so we are opening, as we open up
technologies we are opening up ourselves and our bodies to
technologies. These questions in AI and ethics come up a lot.
What are the considerations we have to take from your work in
ethics and responsibility in machine learning? – Yes, you said we should not
transfer these kinds of decisions to machines. I mean – we should not, we must not, we cannot – I don't really know. But the point is that we should be aware that there are humans behind it. This is really the first thing to get, because people always think that you have an AI system or a machine that learns from data and that it is somehow objective. And it's not. There is this example – I took it from someone else, but anyhow – that you can never build neutral technology at all. Even if you say, build me a table – this is the easiest task, right? Build me a table. You can build a rectangular table where you have the patriarch sitting at the top, overseeing the whole dinner, or you could build a round table where everyone can sit and have the same status. So, just imagine this with a table, and then you have AI systems. It's really: who selects the data? What kind of data is selected? Are the programming teams diverse? Are the coders diverse? Sometimes, they don't even realise that there is bias in the data. So that is the first thing to be aware of. I mean, there are so many examples, you know? There was this guy with dark skin who was categorised by Google Photos as a gorilla, because there weren't enough people with dark skin in the data set, so he was recognised like that. This is actually the first thing that you have to be aware of. And then the question is how to
regulate it, and I'm really pro-regulation. I think that in a world where we believe in democracies – I mean, if we change our conception of good governance, we can decide otherwise, but if we believe that good governance is provided by democracies – it's really the responsibility of the state to set some rules, because right now we have a lot of ethical guidelines by companies. Google has its own guidelines, there is the Partnership on AI where they all are – Facebook, Amazon, whatsoever – and this does not suffice, because it ends where this kind of ethics contradicts profit. If you look at the first ethical guideline of Google, it's: be socially beneficial. So, do we want Google to decide what is socially beneficial or not? This is a commercial company, and they have all the data, so they can decide what to do with the data, who is going to be nudged, what kind of information is provided to a person. And there is the philosopher Metzinger, a German one, a member of the High-Level Expert Group on Artificial Intelligence of the European Union, who talks about ethics-washing: all those companies engaging in these kinds of questions precisely to try not to get hard state regulation. I cannot tell you how to do it, but just be aware that there is no neutral technology, and that we somehow need regulation if we still want to live in a world where democratic systems – if we believe in that – count.
>>Bringing it back to state regulation, and that being capitalised on: is there a way, not just at government level, where we can make regulations through policy, but also a way that we as humans can hack back? You had an idea on consumption – 24-hour consumption? What can we do there?
– If it is possible, I would add something to the question of who decides. Because I perfectly agree: we have to be conscious that behind the machine there is a human decision, and that is the problem, not the machine. Because when we compare human and machine, we usually think of the stupid machine and the human full of virtue, a kind of ethical lawyer – which is not the case: human decisions are biased by prejudice, by ideologies, by other factors, and it's on this point that we have to reflect, rather than saying, oh, the machine will take the power. Because – and I come to the other question – I'm completely sure that there are two anxieties for contemporary humanity. The first is that the machine will take the work, and the second is that the machine will take the power. The first is perfectly right. The second is perfectly wrong. Machines are not interested in taking power – not at all. Forget about HAL and so on, because in order to need power, you need to have a body; only an organic body can have this need for power, or can be bored, or have this immediate necessity of consumption. There is a basic difference between, for instance, two systems that use energy, that burn energy and dissipate it. The first one is a hairdryer, a machine, even if it is simple, and the second one is a duck. From a morphological point of view, they're quite similar, but if you forget a hairdryer in a room for one year and then come back to the hairdryer, it was off, you switch it on, and it works. Which is not the case for the duck. This basic difference means that the organism has needs, a kind of urgency; therefore, I say that consumption is the only thing
that cannot be automated. What to do with consumption? We have to understand, first, that with this total mobilisation, every act we do now is recorded. This is in a sense fine, because it allows a full planning of the economy – the dream of the Stalinist economy can now be accomplished, and it reduces prices, a lot of things. But at the moment there is a gigantic difference between the information that you give and the information that you take. It's a difference that has some analogy with surplus value, when the classical worker in the factory works five hours for himself and another five hours for the owner of the factory. This was the classic Marxist surplus value. Now we are not even aware of working, but we produce value. And note: you cannot say this alienation is the same alienation as working in a factory doing always the same thing, or spending the weekend watching a Netflix series without stop. Yes, it is alienated, but it's not the same alienation, and it is producing value. What is important: in the previous panel, the case of China was evoked. China has a billion smartphone users who do everything with the smartphone, and this means a gigantic centralisation of data – which is the real capital of the 21st century. And this allows China to distribute the surplus value – of course, in an authoritarian way. Europe is half a billion people. I believe that if the European Union made a discourse to the GAFA – we are perfectly fine with what you do, but you get a lot of money, what about socialising part of this money so that we can improve a new European welfare – I'm sure that this problem of populism, which is so strong in Europe, and especially in Italy, could be solved by this.
– Okay, because, exactly, I wanted to come to one of the things in the guide to this panel: what kind of future societies we can imagine, and you're getting very much onto that. Of course, also, right now, with the future-of-work discussions, there are speculations that 60 per cent of jobs will be automated, taken by AI, and the internet of things is tripling in profits, and all these things are happening at the same time.
>>Yes, I'm sorry, but in principle there are very, very few jobs that cannot be automated. If you are a player in the Bundesliga, your work is guaranteed, but the majority of humanity are not champions in the Bundesliga. So what to do?
– Bundesliga. What is interesting: even if we don't work, we produce value.
>>We just need to get the
profit part – we need to get part of the profit back.
– Yes, if we conceptualise work not just as fatigue and alienation but as production of value, then we can recognise all this gigantic production of value – given, for me, now, as work – which gives dignity to people; it's not just a salary. And this answers the question of what to do at the moment in which the jobs that are really needed, not automated, are an immense minority. In a sense, we're now coming to a society – because we often have a bad dream, a nightmare, about the future; I'm not so optimistic, but consider that full automation can create a society which is a close imitation of classical Greek and Roman society. They had the slaves, and we have the machines. All the work was done by slaves in the Roman aristocracy, and the others were deciding instead of the machines, cultivating maybe love affairs, I don't know – it's not bad. In any case, a fully automated society can be closer to this kind of classical ideal, which was based on a cruel reality of slavery. In this case, there would be no slavery, but politics has to think about the way to distribute the value, otherwise we are going towards wars, and social hate, and so on.
>>The question is whether we get a new class system based on who has fast internet and who can upload more. Isabella? – Yes, I agree, and I wanted to
jump in there, because I'm always criticising these dystopian discourses but I tend to engage in them. With the future of work and societies, I think it's so interesting, because in the media we always have a robot, right? A robot is stealing my job – and the best is a robot with headphones. Why does a robot need headphones, when it's software that's going to assist us? There is this 3-D work – dull, dirty, dangerous – this kind of repetitive work we don't want to do, and machines and systems could assist us so that we can really engage in the interesting stuff. And when people are so afraid that their job will be done in the future by a machine, what does this actually mean? It means: I think that right now I do work that can be done by a machine. I mean, isn't that horrible? If I have a job which is so dull and repetitive that I'm afraid a machine can do it – I think we really need a whole new discourse, that this liberates us, and I don't understand why political parties don't really get into that kind of better future, new visions with tech, but instead enter this fearful discourse. They say, oh my God, people are losing their jobs, what will the people do? Maybe we should also redefine what status in society means. Maybe if I do some social or creative work, it should be rated higher – but this is a process that needs time. Maybe I need to found a party, I don't know, but I really don't see why whole programmes are not directed towards this better future with tech. Let's liberate us from the hardship of dull work. It doesn't happen, but, yes.
>>There was an encouragement. Maybe now it will. [Applause].
– If I found a party, please join me!
– I agree, actually, with what you say. It's because of fear – fear of the unknown. It's the same with immigrants, or refugees: people have the fear that they will lose their job to them, but they won't take the job; it's just that the system doesn't protect the people. I also think we need to change the political system, because it's not adequate for the times, definitely not.
>>Which probably also comes back to the discussion of what it is that makes us human anyway, because these are probably old narratives of what we think makes us human – in the same way, also, with you changing your organic body, and people then thinking that that is not human any more, and so it goes into posthumanism and transhumanism. I wonder if that is what made you human – you seem pretty human to me, still – or if it is rather about changing the definitions of what we see as being human. You know, is it your empathy? Is it your – what is it, then?
– I guess maybe it's more individual. I think the future will be much more diverse. More and more, people can identify in other ways – not just men and women, there are many ways of defining your gender – so maybe also whether you're human or not, 100 per cent human, whether you like the technology or not. I always imagine that the future will be much more diverse, and then we need to open our minds and our laws, and our systems, in order to allow all these new entities to exist. Yes, I was going to say something else, but I forgot. I need paper, like you!
– You need a technical assistant! Always more technology!
– Maybe you can build it into your hand!
– Okay, we will open to the
audience for questions. We have another 15 minutes. Here is a
question in the front. There's the microphone coming to you.
– Carl Carlsson. I have a question for Mr Ferraris, and maybe also Dr Isabella Hermann. When you talked about this long, extended chain – we see this in every situation nowadays. For example, when we get off the train, there are lots of people in front of you who are looking at their smartphones. They're looking downwards, like an extended chain. They don't see reality any more; even if you're 50 centimetres apart, you don't see the person in front of you. My question is: is the digital future a future of lost human values – for example, mutual respect?
>>I can directly answer. Well, you see, usually
there is a typical moralistic stance towards this kind of line of people, as standing under a total mobilisation – you can think of it as a kind of military situation. And if it is so, I could say, well, on the one hand, we have several examples of humans who were guided under a total mobilisation also without smartphones, so it is not surprising that humans can act this way. Humans reveal themselves acting this way. We are not people full of original, independent ideas who at a given moment receive a bad gift from technology that transforms our lives. We are so – and therefore we use these kinds of devices. What I would say is: since humans are dependent on technology, we have to recognise this as a matter of fact. The question of what is human requires that the human is an organism integrated with technology. Without technology, you don't have humans; instead of this kind of technology, you can put other technology, for instance rifles, but the result will be
the same, but even worse. So the future of humanity is to increase its own capacity of being free, and in this increase of the capacity of being free, the freedom from work that is given by automation is the most important moment. For instance, now people say that on the social networks people write stupid things, or racist, or sexist things – we were hoping for collective intelligence, and we have collective stupidity. Of course, humans do not become stupid with the network; they reveal – we reveal ourselves by means of networks. But if we can cultivate ourselves, free ourselves from stupid and repetitive work, then we can attain this kind of freedom. I agree with you, it's an ugly image, this series of people marching like soldiers looking at their handys, but, on the other hand, think about the cultural revolution that can grow from this situation. [Applause].
– I just got the end of it.
>>Sorry?
– Sorry – I think it's about whether technology will make us more empathic or not. Actually, I think maybe it will give us more empathy, because if we have more senses to understand the Earth better, for example, then I create more empathy towards the planet. If I get a new sense that another animal has, then I create more empathy towards the other animal, because I understand better how they live. I think in our society there are more and more people who have empathy towards animals – more and more people are vegetarian, or vegan – so I think in the future people will respect the other species and the planet more. There are people who don't, but partly that's because it's not that productive to be that way; they can't make that much money, so it's a bit the political system. But I think, actually, technology can help us create more empathy, because then we can understand better where we are and where we live, and the other species, and we can create deeper connections.
– If you have those people with
smartphones, you don't know what they are doing: they could be posting hate speech, or writing to their mum, or reading newspaper articles, or playing games. To my knowledge, there is no real research which proves that ethical standards are now lower. Actually, quite the contrary: when you have surveys with young people, they tend to say that certain ethical standards are higher, and these are the digital natives. But what I think is a problem – and this also can be regulated – is the scale, because you always have echo chambers or filter bubbles. In the past, it was a village, and there was just one person who went out and came back and told what he or she experienced. In human resources, there were always prejudices selecting the people. Right now, it gets automated, which means you have negative feedback loops. It's a question of scale and dimension. This is where I see the problem with the new technologies. And this is a thing that you could regulate – and it's not that I'm always speaking out for regulation; this is really an achievement of our society, that we have institutions. Technology was always regulated in the past and will be in the future, and maybe we need to solve this problem of dimension and scale, but I don't think that people are less ethical any more. I don't see that.
– Here, up in front.
>>Me? Okay. I have a short question, rather a practical one, actually: if you go to a hospital to have anything done, do you have to carry a sort of organ-donor-style ID to let doctors know that you have implants they might not be aware of? Because, obviously, that can be a health issue if you're getting an MRI, or something.
– Well, it's a good – I don't have that ID, but that's a good idea. That could be a thing.
– Yes.
– The things that we put in are usually compatible, it's not like that. But yes, you're right, I should have this. I'll propose this to the studio! That would be cool, yes.
– My question stems from my own
personal experience. I think many people my age – or at least I can say for myself – the first time I connected with technology, it was not very well thought through; I just sort of slid into it. I was wondering, in the very intimate way that you connect with technology – I mean, it's a new sense for you, it's an organ – how do you reflect on how that changes you? How do you design and pick a sense? I understand what you said about wanting to be connected to movement, but there are actually two aspects to this: how do you design it from the technical perspective, and then, how do you go through the process of figuring out – if you have a broad sense of what this sense, the seismic sense, should be – how it's supposed to work? How does that shape and change you? How does that process work?
– Yes, it's a very long process. I think the hardest thing is to
find the sense – there are so many things. And also, from the foundation, we think the beautiful thing is actually that everyone can choose what to sense. For me, it made sense because I was doing choreography. What would you like to sense? There are so many things. For me, this is what is exciting – I don't think everyone should feel the Earth; everybody should feel what they want. For me, it was a long process: I guess because I was in college studying movement, I knew that I wanted a sense that was related to that. And then, yes, I did other projects, always related to people's movement, and then, I don't know, I got tired of people! I didn't want my sense of movement to depend on others, and that's when I had the image: what if I'm alone on the planet? And then I realised that there was this huge movement that most of the time is imperceptible – 99 per cent of the earthquakes, we don't feel them. This idea of being connected to nature, with the movement itself, and this new dimension. But this is very personal, because I don't know about technology. I always think more about the experience, and then I go to an engineer and say, do you think that's …
Actually, the technology that we use is very simple. We're not super high-tech – for example, the first things I did with people walking used the sensor that you use to dry your hands. And how I use it is very simple: all the data from the seismographs goes online, so for an engineer, making something vibrate every time there is new data is nothing extraordinary. Instead of making technology for your phone or for your cars, if you add the technology to yourself, your experience changes. It's a long process; sometimes
we have artists in residence in our lab, and sometimes it's more about analysing yourself. I have a friend who is developing a new sense that detects atmospheric pressure, and with the changes, you can know if it is going to rain or not, so he's like the weatherman – he can feel the weather. And he analysed what he wanted to understand: he was always upset with the rain and the water, and through this he – so I guess it's a bit of a journey for you. It's so experimental, no-one has done it
before, like, for example, the first time I had the – we call it exosense, because
it was outside, I put it here because
it was far away from my vital organs, then I put it up, up
after putting it on my arm, and then I did the implants here
after, and then I realised I didn’t make any sense to feel earthquakes in
my arms. I thought if humans have this new sense, they would
have it on the feet which is the part of the body which touches
the floor. I change it. So everything works
experimenting, and, yes, sometimes, I feel like I
felt a lot about what to sense, but I
didn’t think too much how to feel it. I knew I would make sense of the
vibrations, but then about the organ design, my friend has – Neil has
the antenna. For me, it’s not something that has to stand to
be outside, so, for me, I kind of like it that it is hidden
because I don’t need any immediate, so I can feel things
that happened far from my body, so it doesn’t need to be
outside.>>Are you open source cyborg.
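The pipeline she sketches – seismograph data published online, and something vibrating every time a new reading arrives – could look roughly like this in Python. This is an illustrative sketch, not her actual code: the USGS feed is one real public source of seismic data, but `vibrate()` and the polling logic are assumptions.

```python
# Rough sketch of the setup described in the panel: poll a public online
# seismograph feed and "vibrate" whenever a reading we have not seen
# before arrives. vibrate() is a stand-in for the real actuator.
import json
import time
import urllib.request

FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def fetch_quakes():
    """Download the feed and return (event id, magnitude) pairs."""
    with urllib.request.urlopen(FEED) as resp:
        data = json.load(resp)
    return [(f["id"], f["properties"]["mag"]) for f in data["features"]]

def new_events(events, seen):
    """Return only events whose ids are not in `seen`, then record them."""
    fresh = [e for e in events if e[0] not in seen]
    seen.update(e[0] for e in events)
    return fresh

def vibrate(magnitude):
    """Placeholder for the vibration motor the implant would drive."""
    print(f"bzzz (magnitude {magnitude})")

def run(poll_seconds=60):
    """Main loop: fetch, diff against what we've already felt, vibrate, wait."""
    seen = set()
    while True:
        for _, mag in new_events(fetch_quakes(), seen):
            vibrate(mag)
        time.sleep(poll_seconds)
```

As she says, nothing here is extraordinary engineering; the novelty is in routing the output to the body rather than to a screen.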
If I wanted to feel earthquakes, is the code online? – Yes, it's all online, for sure. You can check. – Last question there? – I wanted to complicate the dream you all voiced about a labour-free society through automation, and I'll keep it short because we're running out of time. Over there in the art section there is a beautiful graph illustrating that there isn't just one human being behind technology, in the sense of design and programming; there are many human beings behind technology doing dire labour: labelling data, moderating all the bad stuff that emerges on social media, mining rare earths, assembling our devices, and so on. So when we think about how technology changes the human condition, we need to look at a global scale, because right now, unfortunately, under the current global circumstances, automation is actually more expensive than human labour in many parts of the world.
– Yes, for now automation is less convenient than human labour, but this will not be the situation in the centuries to come. If we have to think about the direction for humanity – and I believe it is necessary to ask what that direction is – it is increasing automation and decreasing human labour. China is an example of this: at the beginning it was the typical outsourcing destination for basic, non-automated work, and over time there is less and less of this and more and more digital labour and production of value. I'm not saying – of course, if you go to Africa, it is not easy to find people who run in order to burn calories, but this would happen if this is indeed the destiny of humanity. We should hope for it, and we should prepare and organise in order to distribute it justly. Of course, there are a lot of interests, and there are a lot of places where it is not yet so, but the point is not simply to say what there is, but what should be – and what should be, in my eyes, is the end of labour.>>Thank you. I would like to thank all the
panellists very much for their insight. [Applause]. – Thank you so much, Michelle. What a fascinating panel to end things on today, with the question of whether we are going to be choosing our senses in future like we choose tattoos today – what to put on our bodies, and where. I think you have a lot of food for thought to leave the room with. We hope you'll join us for a drink. Before we leave, let's take a look at what our graphic artist made out of this panel – I'm curious to see. If you're interested in this form of documentation and other forms, as I mentioned earlier, we will be posting the videos of the talks and sessions held here in the Weltsaal online, and also releasing a printed publication for all of you who attended – so if you didn't take enough notes, we did that for you – and we will of course be sharing the graphic artwork as well. Let's have a round of applause for everybody: the translators, the documenters, everybody who helped do that today. [Applause]. And, of course, we would like to thank everybody who made this event possible as well: the team, all of our speakers, the moderators, and everybody who contributed, and all of you for coming today and being part of this first edition of Future Affairs. I hope you agree that it was a very interesting day and a success, and I hope it will be the first of many to come. I look forward to seeing you at future editions. Now for a drink. Have a good evening.
– Thank you very much. [Applause].
