Things are looking good for me in this initial vote. But…this is Yurizono Shihori we’re talking about. I can’t even imagine what she’ll try to throw at me. So the setting is…an AI experiment, huh… I feel like it’s
a little weird to have the option to not even turn it on. Remembered: “Don’t turn it on” Isn’t it just there for show? It doesn’t necessarily mean anything. …I really can’t imagine that’s the case. Would Alice ever do something pointless like that? No… It wouldn’t be out of character for her. I might not have to worry about it too much…
Anyway, let’s hear what my opponent has to say. …Yo, you guys gonna say anythin’, or? We’re never going to get anywhere this way. I wanted to hear more about her stance before getting into it but… Please, go ahead, Agree Side. …The heck? Hey, um, Monkey-san?
Are you not going to represent us? Who, me? I just kind of picked at random… …at random? I just think we can leave it up to the AI. If somebody breaks the law, they’ll get punished accordingly. In other words, we can live safely without having to do anything. I don’t plan to do anything, or have anything done to me. That’s one way to live, right? …That’s your motto? It’s how I think, yeah. That’s…a pretty subjective way of looking at it… Anything else? Nope. That’s it. You’re done?! So she was just putting her own wants ahead of everything… She’s not lying. H-Hey! So could I give my opinion instead? Of course. Be my guest. So, uh, first off, it’s not like we won’t have any laws or rules, right? You don’t think so? I don’t consider a lawless land to be that odd, myself. Remembered: “Lawless Land” I think it’d be better if we came up with the rules ourselves. Humans should decide humanity’s future. But I mean, we’re just regular schmoes? You should know what you’re doing when coming up with wording,
so that it’s easily understandable. If people start exploiting loopholes,
that’s on the people who make the laws. Remembered: “Responsibility of Those Who Make the Laws” …Well… I agree. Whose side are you on, exactly? Look, 2+2=4, no matter what you want to argue. So… However, even though I agree,
that doesn’t mean I think we should leave everything to the AI. Humans are the only ones who can understand how humans feel. What an idealistic thing to say… Idealistic? Don’t misunderstand. An AI is just an AI. Can we really leave these kinds of
emotional decisions to a cold, unfeeling machine? No matter how many patterns it analyzes,
there’s no such thing as an identical case. That means that the decision-making would be left to the AI itself. I s-see… No, wrong. Huh? The AI is infallible. Let’s get that out of the way first. Which item will you forget?
– we don’t have to turn it on – lawless land
– the responsibility of those who make the laws – don’t remember this Forgot “the responsibility of those who make the laws”
Remembered: “the AI is infallible” Putting aside that it’s a theory for now, if the AI had emotional capabilities,
wouldn’t that make it a perfect candidate? It’s under that assumption that we’re having this argument, no? Yes. What the? Where did this come from all of a sudden? If the machine can anticipate how a human would react,
how is that different from it feeling? How will you respond?
– we don’t have to turn it on
– lawless land – the AI is infallible
– let it pass What else is she thinking…
Let’s see where this goes. Uh, I mean sure… That’s entirely unprincipled. But having heard the explanation, I can’t help but agree! …I do feel like the Agree Side is more in the right here. How will you respond?
→ the responsibility of those who make the laws But an AI has no humanity. Therefore it can’t take responsibility. What do you mean? There’s no way that a machine can understand human emotion. If this AI made all of its decisions based on past cases,
there’d definitely be a problem. And that’s because the law itself has flaws. …Tiger-san. Huh? Uh, yeah? Let’s say, hypothetically… that because of a mistake made by the AI,
you lost someone close to you. But that wouldn’t be murder; it’d be manslaughter.
Since an AI isn’t human, it can’t take responsibility. The AI would never see consequences, and it would just carry on as usual. You said so yourself, didn’t you?
If the people who write laws aren’t perfect in their wording,
they have to take responsibility for that. …Yeah, that’s true, huh.
I guess I can’t trust the AI 100%. The minute you take something on,
you also take on responsibility for how it turns out… Whew. Got control of the argument. I think I can push through if things keep going this way. Do you have any rebuttal, Agree Side? *No, not really.* …Is that so? Her first lie, huh. Is she hiding something?
If so, why not argue back now? Oh my god, is it because she thinks arguing is too much trouble? No, I don’t think she’d go quite that far. At the very least, I shouldn’t let my guard down. I wonder, do I…? …What are you hiding? Wait, we all agree, but we’re still going? It’s because we just started.
We might see an upset here. What? How? Yeah, it’s pretty one-sided at the moment… I mean, we’ve only really been discussing the Anti Side so far. I have to admit, this has been disappointingly easy. Could I really end things right here? Hehe… Monkey-san? You’re way off base, probably. No, I think it’s more that we have no idea what you mean. Why would you even want an AI to take responsibility? I knew it. She was just pretending not to have a comeback in the last round. I can see through a lie, but I can’t do anything if she just won’t talk. Room for responsibility, huh. Remembered: “responsibility” No, even if you’re personally satisfied… The AI’s just a tool. It’s not a human being. Who would ever demand that a tool be held accountable for anything? If someone is stabbed with a kitchen knife, is it the knife’s fault? Oh, t-true… That’s a…weird example… She’s got a point.
I thought of that, but I never imagined it’d be used against me… B-But by that logic, then no one is responsible for any mistakes. Do you really think that?
You do realize, don’t you, where the fault really lies? …Of course I do.
In this case, there’s a person who’s at fault.
But… Would it be the AI’s inventor?
Since it made a mistake? No. Huh? No, it would be on us, since we decided to turn the AI on. Exactly. So you do get it. Remembered: “you did get it” The AI is only meting out punishment according to the law.
What if the law itself is wrong? No, that doesn’t hold up. The laws are imperfect, and everyone knows that, so that can be utilized. So ultimately, that’s our fault for consenting to abide by something imperfect. I see. That’s satisfactory. How come you’re giving the Agree Side fodder when you’re on the Anti Side? If I try to deny this now, it’ll have the opposite effect. They told us at the start that we don’t have to turn the AI on. Back during the little drama part?
I didn’t really pay much attention to that… I remember it. If we choose not to turn on the AI,
then the experiment is moot.
I thought that was weird. There was a reason. We don’t need to turn on the AI.
In other words, if we choose to turn it on,
we forfeit our right to not turn it on. To put it another way, turning on the AI is very much our own decision. That means that if anything goes wrong with the AI, it’s our fault. I see… So it’s significant that we turn it on with our own hands. We’re responsible, huh… But no matter what happens, someone has to take the fall. We’re in a situation where it’s intentionally difficult to survive,
so in the end we’ll have to do something to protect ourselves. Remembered: “situation designed to be difficult to live in” Maybe it’d be different if we were in a sci-fi movie where the AI had a heart. Which thing will you forget?
→ you did get it Forgot “you did get it”
Remembered: “sci-fi movie” At the very end of the first phase,
Yurizono Shihori suddenly butted in. So there’s no way an AI could betray you. She said earlier that if a machine could mimic emotional thinking,
it would be the same as it actually feeling emotions. Just like I did with the talk about responsibility,
there’s no doubt she’s trying to drive the conversation. There is culpability, it’s just not on the AI. How will you respond?
– responsibility
– sci-fi movie → situation designed to be difficult to live in
– let it go I think that if you were too mechanical about it,
that would make it difficult to live with. If we make laws ourselves, we’ll be better able to
adapt to them. So uh, how exactly do you adjust to a law? Huh? Exactly. If there isn’t a strong foundation,
that will make everyone unsure of what to believe. I…yeah, you’re right. I guess that was a little out of left field… My rebuttal timing was off.
I should wait to hear what my opponent has to say first. (Let’s try responsibility this time) In other words, you’re saying if we don’t turn on the AI,
we never have to shoulder any responsibility? Sure. Of course, you can consider that an option. So you admit that turning on the AI is nothing but a risk? …I don’t think that’s quite it. Huh? Dog-kun, you said that if no one was going to take responsibility,
that was a reason to not turn on the AI. And then Monkey-san answered by pointing out that there would be culpability. Of course, if there was a way to protect ourselves
without being at fault, we would take that. And if such a dream machine existed, then there’d be no reason not to use it. But if us being responsible is on the table,
then we need to aim for a society without crime. I believe that that’s beneficial enough. Exactly. I feel the same way. Shit… (OKAY. LET’S TRY “So you did know”) You said it yourself. That I did understand. About what now? Exactly. I did get it. I just didn’t say anything at the time. Yeah? So…what? I get the feeling that I jumped in to rebut before understanding
what her stance was… There are lots of sci-fi movies where humans leave everything up to AI,
don’t you think? Yes, and? Things get switched around.
The humans become the tools of the machine. Obviously, that’s not an ideal society. I feel like you’re really taking this and running with it… Do you not understand that we’re talking about the implications
of utilizing an AI to protect us? Let’s try hearing Yurizono Shihori out a bit more. Maybe I’ll find a hole in her logic. All I want is an easy life. What’s with that sudden proclamation? But I know that there are as many people who want easy living
as there are stars in the sky. A life without suffering. Obviously, production and consumption are proportional. Forgot “you did get it”
Remembered: “production and consumption” So if you want to be a consumer, it means someone somewhere has to suffer. Do you know what it is that prevents us from having easy lives? How will you respond?
→ production and consumption It’s supply and demand, isn’t it? Since that’s the way of the world, that’s what’s making you suffer. You can’t have one without the other. No, that’s not it. It’s true… Without production or consumption, we wouldn’t be able to live… …I suppose that’s true. Dammit. I’ll have to think a bit more before I speak… Yurizono Shihori must have said something earlier. → Responsibility Is it…responsibility? Yes. Responsibility and vanity. These two things will ruin a person. Especially responsibility, given that it’s so often pushed on you
as a side effect of having rights. You can’t do anything about it. So do you give up and accept it? To some extent. That’s why I always try to take on the least amount of responsibility. True. It’s a balance between benefits and the responsibility that comes with them. The AI is an unknown factor,
so accepting it sight unseen is a huge risk. However, in exchange for that risk, we could
get an AI to mete out impartial judgement. If you’re looking for rationality,
then this is the best system to protect law-abiding citizens. And even better, we’d be looked after for the rest of our lives for running it. The AI may give us fair punishment,
but that doesn’t necessarily mean it will protect us. Of course I understand that,
but I’ve never done anything to garner hate from anyone. Nor do I intend to.
I just want to keep living by myself, unbothered. Even if I were put in charge of looking after the AI…
it’s not like there’s anyone who would want to kill me. …That you know of. May I say something? What is it? That’s very much your personal opinion.
Not everyone may want a world with no interference. Naturally. I’m just saying what I would do.
I didn’t mean to force it on anyone else. It’s how I’d uphold the law and build fair relationships. What does that mean? It may seem you get along on the surface,
but in reality the other person is desperate to escape.
You’d see through that charade. Of course, it’s possible to betray someone without breaking laws. But at the very least, they couldn’t break the law. Hmm, I guess that makes sense. You never know what another person is thinking.
Don’t you agree? I do not get this woman at all. Well, I can’t really think of a reason to argue with you… So…have we decided? Game, set, match? …There’s no way. She’s right about it being fair. …and yet. A rational, fair world.
That’s not necessarily an equal, happy world. Instead, if that’s how the world was, people would
go out of their way to avoid each other, and it would
be a cold, isolated world. Without human interaction, you lose your humanity. I do think that Yurizono Shihori believes this is right,
but that’s a world where social interaction dies. In other words, she’s arguing to
supplement something that benefits her… I don’t have any objection to trying to be fair. …But. (At this point Kaname repeats himself so I’m not putting in subs for that again, sorry) There’s nothing wrong about that way of thinking, but…
there’s still something not quite right. We’re still doing this? I still haven’t heard everything about your proposal. *We all understand each other now.* She sure knows how to not read a room when it suits her… I want to ask about other benefits to leaving everything to the AI. The world we live in now is rife with problems. Remembered: “the world we live in now is rife with problems” Even under the law, which is supposed to be impartial,
there are contradictions and issues. Witnessless murders, bullying in schools… Remembered: “witnessless murders” The police’s job is to keep the peace,
but in reality it’s so hard to do that in so many cases. Remembered: “keeping the peace” This AI can do that. In other words, to reject this AI is to reject
the police and the government. How will you respond?
→ witnessless murders But in the case of a murder without any witnesses,
you can never resolve that no matter how rational you are. To solve that, you’d have to keep watch on the entire world. Do you think that’s even possible? I can’t say… It isn’t. And despite that,
the police still aim for a crimeless world. Ah true, you could look at it that way too. I understand that. So? Well… …What? → This world is rife with problems. No matter how rational you are in your approach,
you’ll never eliminate every social issue. Why not? It’s just not physically possible to solve every problem. Can you give me an example? For instance… …for instance, what? Can you not think of one? And so what if we can’t solve every single social issue? What was it I wanted to say?
I don’t even remember… Maybe I should go for the contradiction in keeping the peace. (Fuck it, try the keeping the peace one) Let’s actually talk about that.
Modern day police and governments have huge contradictions
when it comes to keeping the peace. Big contradictions? What do you mean? Inequality, inaccuracy, crimes that are never taken to court… You may aim to get rid of all of that,
but society accounts for that being impossible. …Huh? …What? I don’t really understand what you’re trying to say. Maybe that it’s better to not be on watch at all times? Exactly. Yeah, I don’t follow at all. Please explain so I can get it! In other words, if we turn on the AI,
we’ll know when someone somewhere does something bad. At that point, we can’t overlook it.
I guess I’m trying to say that a situation like that would be tough. How? Isn’t that a good thing? You’d think so, wouldn’t you?
But we all have the right to overlook things. What’s that? Yo, don’t just have a convo between the two of ya.
By the way, what the fuck’s up with how yer talkin’? It’s got nothing to do with you. Tch… Whatever. Anyway, if we lose the ability to overlook things,
that negatively impacts our lives. Got an example? For instance, say you ate some of your friend’s snacks. That’s technically theft. Theft is bad. It’s a crime. If it weren’t settled out of court, then… Ah, I get it.
The AI would punish you. Precisely. Non-malicious, small infractions would gradually become
treated as more serious crimes. When you put it like that…it’s a little scary. If we turned on the AI without taking that kind of thing into account… Is that what you want?
To end life as we know it? I’m sure it would feel strange at the start,
but it would be better than the vagueness of our current society. If there weren’t any vagueness,
then people would get used to that kind of society and not
feel strange about it anymore. You think so? It’s just like evolution in biology. The ones who survive
are those who are best suited to the environment. It’s not that there are vague points in our society merely by accident.
They should be there. And there’s more. Such as? The next problem is that if we allow the AI
total control, we can’t amend the system. Self-reflection is a way to bring out the best in yourself. Our conscience and logic can never grow stronger,
and so we just ignore it. That’s no way to improve. We can also think about things that worry us.
When we reach a conclusion, that’s satisfying, right? You aren’t wrong.
I agree that’s very important. Does the AI have room for revised thought patterns and logic? Being constantly monitored brings not only fear,
but also robs us of the ability to think. If someone commits a crime and never gets to repent,
but is just punished and then forgiven, that’s
nothing more than a mechanical process. The only kind of society we could get from this AI
is a lukewarm, utilitarian one. People would stop interacting for fear of running afoul of the law. That’s a world where human connection is dead.
Is that a world you want to live in? …What if I do? Then I would have to admit that I do not understand you. And what’s wrong with that? Still, I’m working hard to understand. Can you say the same? I… There’s that strange sound mixed in again. Concentrate. What is her heart trying to say?
– I don’t want to engage with people, regardless of the reason
– Whatever. Hurry up and kill me. – I can’t understand other people. ← I don’t understand other people at all. And what’s wrong with that? What you can’t do isn’t the issue.
It’s what you won’t. Is there a point in trying, if I know I can’t do it? There is.
Well… I believe there is. …Yeah, I agree. Perhaps this is idealistic, but I don’t think we should lose that. Yeah…good point. Working hard. Believing things.
Why are you all so content with such vague answers? Is that really the right way to think?
Isn’t there a chance you’re wrong? Of course…I can’t say that it’s right.
But there are people who back up my opinion. So I suppose you could say that we understand each other. No. There’s no way you understand each other.
We need a way of making decisions without having
to understand each other… …like the AI, you mean? Exactly. …huh, so that’s what ya wanted to say in the end?
I’da thought you’d have a better reason. I’m pretty lost a lot of the time too.
I…understand wanting to delegate decision-making to someone else. But even so… I don’t think
I could ever see things the way you do. Monkey-san. …What? You don’t know what other people are thinking.
You might get betrayed, and that worries you. But that’s exactly why you need to try and get to know others.
Otherwise your mood swings wildly based on what they say. Don’t you think that’s important? Sometimes you hurt someone; sometimes they hurt you.
Despite that, you keep progressing. I think that’s what deep down makes us human. I don’t know who you are, but I like what you’re saying. Me too. I like the world how it is. I want to live in a world where even if I fight with my friends
sometimes, we can still make up afterward. Misa… I…really don’t get it. But, I never understand anything, so I guess that’s that.