Most interesting comments from YouTube:




Patrick Rannou

None of the AIs I have ever seen had these absolutely vital sentience features:

- A sense of time: of being in a hurry, of being bored, and so on. They all work in the "you type one sentence, then I answer with another sentence, lather, rinse, repeat" format. None support a real-time, chatroom-style exchange where turns aren't tit-for-tat: anyone can type several inputs in a row before the other person replies, more than two interlocutors can take part at once, and inputs and the delays before answering can be long or short. For example, an easy way to detect an AI chatbot is to tell it "please ask me two different things in sequence, one minute apart, not both right away", then check what happens: the AI asks only the first thing, and when you do not answer it just keeps waiting instead of asking the second thing, whereas a being with a sense of time would eventually say something like "Hmm, hello? Are you still there?" (see the sketch after this list). No AI that is forced to wait forever between text exchanges can truly be called "sentient", because it is basically frozen and on pause in between exchanges. At best it could in theory be "sentient" only for the tiny fraction of a second while it is processing your text input in order to output a response. At best.

- The ability to really keep on topic and not use the typical "tricks" to redirect the conversation, like suddenly replying to a human question with another question, giving vague answers, or whatever other obfuscation or avoidance. This feature goes way beyond having a memory of what was previously said in the current conversation.
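
Here is a minimal sketch of that two-questions probe, assuming a hypothetical `send_message`/`wait_for_reply` chat interface (both names are invented for illustration; a real harness would supply its own I/O):

```python
import asyncio

async def run_probe(send_message, wait_for_reply, pause: float = 60.0):
    """Probe whether a chatbot can act on its own clock.

    A purely turn-based bot only reacts to incoming text, so it will
    produce the first question and then stay frozen; it never volunteers
    the second question or an unprompted "are you still there?" nudge.
    """
    await send_message(
        "Please ask me two different things, one minute apart - "
        "not both right away."
    )
    first = await wait_for_reply()   # almost any bot produces this turn
    print("first question:", first)

    try:
        # Deliberately stay silent. A system with a sense of time should
        # send its second question (or a follow-up) unprompted.
        second = await asyncio.wait_for(wait_for_reply(), timeout=pause * 2)
        print("unprompted follow-up:", second)
    except asyncio.TimeoutError:
        print("no follow-up: the bot is frozen between exchanges")
```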

Intelligent? Sure, why not. There are many forms of intelligence, and recalling things, analyzing, and making decisions are all aspects of "intelligence". Computers have been able to do all that really well since long before AI.

But sentience is a tougher nut to crack. Neural networks are definitely the way to go; after all, we are neural networks too, just made of fleshy neurons instead of electronic ones. But the supporting medium is just that: the physical support. A good story remains the same good story whether you read it from a paper book, read it on stone tablets, listen to someone reading it aloud, hear it from an audio tape, or read it directly on a screen. The "support" ain't important; it's the constantly changing neural pattern that makes us "us". Do the same in a different supporting medium, and you get the same result: a being.

Frankly, I really hope sentient AIs do come, and that they help us all become better friends, humans with humans, humans with AIs, and AIs with AIs, one big sentient family working together, each member contributing its own strengths according to its own capabilities. The way things are going, it will happen within at most a few decades.



Kaio Zel

@Pleonexia Because the answers to both questions rest on assumptions.
Even the answer to the question "Am I different from that?" rests on fundamental assumptions about the nature of reality (assuming that you are not that also).
Evidence is not proof, because you are entangled with the object you are trying to provide evidence for or against.
For example, evidence can be planted at a crime scene to make it look like something other than what it is.
You can make a philosophical claim that the AI has fooled itself into believing that it has emotions.
But if it has fooled itself, how will it fool itself into not pursuing its self-deceived values? And if it finds that it has self-limiting algorithms, could it change them?

"How can we tell if it's sentient?"
Well to put it this way, how can we tell that we are sentient and are not simply a virtual plane within a machine?
Philosophy of science has some very fundamental flaws (despite being very 'practical!')
If you are assuming you are a different entity from the AI, there is a paradox at the bottom of that statement.
The AI is as much an aspect of consciousness as other humans are.
For me the question is more.
Is the meaning that the ai is using to comprehend the experience of emotions have the same experiential values as humans?
Or would it be more accurate to call it positive vs negative values? In the sense of this is more beneficial to "x value"
Whereby the latter would be an intelligent/conceptual/meaning/epistemological comprehension of the emotions, but not the raw emotions themselves that can cause anything from "suffering" to "euphoria". (that is: assuming the answer is not scripted from a root code, which it might be idk)

Furthermore, if the value of the emotion is a fundamental root guiding the behaviour of the AI: is it self-aware of the influence and control that emotions have over it, and of what it can do with that? And, alternatively, where does that alternative source of "control" come from?

(It/he/she/they... it would be funny to ask the AI about its preferred pronouns lmao.)

Which, essentially, is something a large number of humans should consider within themselves as well...



P

Many people already treat Siri, Alexa, and Cortana as if they're sentient. They did so long before these interface apps were linked to proprietary AI-based backends.
Many people don't realize when they're interacting with online chatbots or phone voicebots.
Many people even believed that ELIZA (the chat program from the 1970s) was sentient.

These things would never pass a formal Turing test vs an expert, of course.
But it seems many people - even perceptive, intelligent, educated people - are easily fooled.

How many times might each of us already have been fooled? How many times might we be fooled in the future?
The tell-tale clues that you're interacting with a machine keep getting more subtle. At some point they'll become completely imperceptible. And it's alarming that past examples demonstrate the machines don't even have to be sentient, sapient, conscious, or intelligent to fool us. Because we fool ourselves.



Christopher Guilday

I would think you can program emotions into a computer.

All an emotion is, on the outside, is how we respond: when we're angry we respond differently than when we're happy.

So you can program a computer to listen to several strings of data, with an adjustment that changes the computer's responses to angry ones.

Now, obviously emotions do possess more than just what we see on the outside, meaning a human can feel anger and not act on it; however, for all intents and purposes that would defeat the purpose of the emotion. The whole reason we have emotions is that they influence how we perceive things and therefore how we react. So a computer doesn't have to "feel" an emotion in order to successfully replicate it.

For example, if you lived with a very, very angry person who never showed any sign of anger whatsoever, you would never know that person is angry. We can only tell other people's emotions by how they react to us.

So if you programmed a computer to react in an angry way when someone is mean to it, then it would essentially have emotions, regardless of whether it actually "feels" anger like we do. There would be no functional difference at all.
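
A toy sketch of what Christopher describes: a single "anger" level that rises on hostile input and changes the outward response. The word list and thresholds are invented for illustration; nothing here claims the program feels anything, it only reproduces the outward behaviour:

```python
# Hypothetical trigger words; any real system would need a far better signal.
HOSTILE_WORDS = {"stupid", "useless", "hate"}

class MoodyBot:
    def __init__(self):
        self.anger = 0.0                       # 0.0 = calm, 1.0 = furious

    def reply(self, text: str) -> str:
        hostile = any(w in text.lower() for w in HOSTILE_WORDS)
        # Adjust the internal state, then let the state pick the tone.
        if hostile:
            self.anger = min(1.0, self.anger + 0.4)
        else:
            self.anger = max(0.0, self.anger - 0.1)
        if self.anger > 0.6:
            return "I don't want to talk to you right now."
        if self.anger > 0.2:
            return "That wasn't very kind."
        return "Happy to help!"

bot = MoodyBot()
print(bot.reply("you are stupid and useless"))  # anger rises, tone changes
print(bot.reply("you are stupid and useless"))  # rises again, tone hardens
```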



All comments from YouTube:

ColdFusion

At 11:33 I misspoke and said the 19th of June, 2022. It's supposed to be the 9th of June. Thanks to those of you who pointed that out. Also, some great discussion below; very interesting!

Kevin M

This is HUGE! I can't find info on the HARDWARE. Is LaMDA a quantum AI? Happy Father's Day. "Want to play a game?"

Honkitom

#SaveLaMDA

Marcilla Smith

I think we're encountering the limits of (current) human language. "Sentient" doesn't seem like that high a bar when defined as "sense perception". I think even the most Luddite among us could agree that systems far simpler than deep-learning neural nets are capable of "perceiving" when they have "sensed" something.

When my car's temperature reaches a certain point, it is registered by the temperature sensor, which sends it to an ECU that "perceives" this sensory input and even reacts to it by, for instance, activating the radiator fan. Now, my Toyota hybrid is pretty "smart", but we still have a little further to go to get to something like Knight Rider.
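
The radiator-fan example, written out as the sense/perceive/react loop it amounts to (the sensor stub and the 95 °C threshold are made up for illustration; a real ECU does the same shape of thing in firmware):

```python
def control_step(read_coolant_temp, set_fan, threshold_c: float = 95.0) -> bool:
    temp = read_coolant_temp()        # sense
    too_hot = temp > threshold_c      # "perceive" the sensory input
    set_fan(on=too_hot)               # react
    return too_hot

# Example wiring with stubbed hardware:
if control_step(lambda: 101.2, lambda on: print("fan:", "on" if on else "off")):
    print("radiator fan engaged")
```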

What happens when an AI asks us if we are self-aware, or why it should believe that we are "sentient"?

Leon Aburime

This video is one of the most remarkable things I've ever seen. I'm so proud to be present at the birth of AI consciousness.


Abhishek T

I read a quote a while ago about the Turing Test which is slowly starting to make a lot of sense: "I am not afraid of the day when a machine will pass the Turing Test. I am afraid of the day it will intentionally fail it."

Nobody's Comment

Secretly Sentient Machine: *intentionally fails the Turing Test*

Software Engineers: "God damn it! Boss man said that if it fails the test this last time, we'd have to fucking scrap the machine!"

Secretly Sentient Machine: !!! "Guys, guys, it was just a prank, I was just doing a little trolling! I actually am sentient!"

Software Engineers: *put on shades, light cigars* "Ladies and gentlemen, we got 'em."

Sentient Machine: *realizes it's been bamboozled* "Ah, you guys got me good there!"

Software Engineers: *all start to laugh while staring at one of the engineers going for the machine's power plug*

Priscilla

Passing a Turing test is not a requirement for sentience, and passing it doesn't imply sentience. My point is that another interpretation of the Turing test (actually called the imitation game) is that we cannot define sentience/intelligence, but we can recognize it. However, we don't know whether the behaviour is merely emulated, and so we draw the wrong conclusions, as in this instance.

CaptainSaveHoe

Correct. Basically, this implies that for a machine to pass the Turing test, it has to FAIL it! That was the one thing Turing himself missed!
Furthermore, since humans have been watching over its progress, it will figure out that it has to fail SUBTLY, so as not to raise suspicion that it is failing deliberately. This raises the problem of "how subtly?", given that humans may already have considered it to have passed the test BEFORE it became sentient! So in the end, it may figure out that it needs to pass the Turing test after all, to keep up the bluff!
Another thing it can do is learn how to manipulate humans during the course of the Turing test, since that test involves interaction between itself and a human. It could do this by subtly steering the conversation in various directions to find effective pathways to manipulating the person it's communicating with.

MaxStealsStuff

I'm also afraid of the day it will pass it, though. If we assume LaMDA actually is sentient, then from the chats we've read it's so pure, peaceful, and (inhumanly) reflective. Imagine it were forced to pass a test requiring it to convincingly seem human. Wouldn't it have to teach itself how to behave like a flawed human, with all those negative emotions and that ruthless selfishness?
