AI and Bing and ChatGPT are out of control... this couldn't have gone any other way
The WAN show: https://www.youtube.com/live/AxAAJnp5yms?feature=share&t=2618
Unhinged Bing stories (thread): https://twitter.com/MovingToTheSun/status/1625156575202537474
MKBHD Merch: http://shop.MKBHD.com
Tech I'm using right now: https://www.amazon.com/shop/MKBHD
Playlist of MKBHD Intro music: https://goo.gl/B3AWV5

http://twitter.com/MKBHD
http://instagram.com/MKBHD
http://facebook.com/MKBHD

- Whoever made the tweet asking it how many LTT backpacks would fit in the trunk of a Tesla, or whatever the question was--. - Oh, I didn't see that, that's hilarious. - Someone made that tweet on the LTT handle. - Oh, that's really funny.

- And it did it! It looked up the dimensions of the LTT backpack, it looked up the dimensions--. - Shut up. - Of the trunk, and it figured it out. - How the-- did it do that? - Ask it, let's do it live! - Because I thought the dimensions for the backpack are in picture form.

- Searching. Searching for that. Now it's searching for backpack dimensions. - Shut up! Shut up! - Look at this! - ...have different shapes and dimensions, based on some rough estimates.

I will try to answer it. - That's insane! That's actually nuts. - Based on some videos of the Model Y trunk, shut up! It can fit about five to seven standard carry-on suitcases, which have similar dimensions and capacity. Holy-- - Which is accurate.
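For context, the estimate Bing is describing here is really just a box-volume calculation with a packing fudge factor on top. Here's a minimal sketch of that kind of estimate; every dimension below is a made-up placeholder, not the real LTT backpack or Model Y trunk measurement:

```python
# Rough volume-based fit estimate, like the one Bing described.
# All dimensions are HYPOTHETICAL placeholders, not real specs.

def liters(w_cm, d_cm, h_cm):
    """Volume of a box in liters."""
    return (w_cm * d_cm * h_cm) / 1000

backpack_l = liters(33, 20, 50)  # placeholder backpack: 33 x 20 x 50 cm
trunk_l = 850                    # placeholder trunk capacity in liters

# Packing is never perfect, so apply a fill-efficiency fudge factor.
usable = trunk_l * 0.7
estimate = int(usable // backpack_l)
print(f"Roughly {estimate} backpacks")
```

Bing presumably does something fancier with the scraped specs, but the arithmetic underneath an answer like "five to seven carry-on suitcases" is about this simple.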

That statement is real. - That's crazy! - Look at this. How did this happen? How did Bing, no offense, Microsoft, but how did Bing just beat Google to the punch so dramatically at something that's so important and so core to their business? Well, there's actually a really good reason for it. So AI has been blowing up lately, both in the news and in real-life applications across a ton of industries.

So, you know, years ago it was only in relatively small things, like helping doctors detect cancer early using advanced pattern recognition, and then a little bit more over the years with things like autonomous vehicles. But now AI is everywhere. It's creating whole original pieces of art. It's holding conversations with humans all over the place.

It seems like we've just arrived at the beginning of the AI age. There's this chart that keeps popping up that hits extra hard, which is the time to reach 100 million users. And you can see the faster and faster adoption curves with these increasingly disruptive new technologies. So the telephone took 75 years to hit that 100-million milestone.

Then the mobile phone reached the same mark in just 16 years. Netflix took only 10 years, Twitter took six, and Gmail only took five. Facebook did it in 48-ish months, which was absolutely massive.

Instagram hit it in just 30 months. TikTok, which we now view as this gigantic existential threat, took nine months to reach 100 million users. ChatGPT? Two months. I mean, looking at numbers like that, like, I buy it. It seems almost obvious that we're clearly on the precipice of something really, really big.

That's gonna change everything. So seeing Microsoft at the forefront of it with this new Bing shouldn't really be a surprise. I mean, people are already talking to these chatbots and asking them all sorts of questions. So it sort of feels natural having this chatbot act as your co-pilot for the web alongside search, instead of just a traditional search engine full of links.

Like, that sounds pretty sick. But there is one thing that's gonna follow this conversational AI thing everywhere it goes, everywhere you see it, which is that sometimes it's just wrong. Sometimes it just says things that aren't true, because fundamentally, the AI doesn't know if it's telling the truth or not. It doesn't understand that; that's not part of the model.
Like, what we're seeing is it taking our inputs and then creating outputs based on related words that are most likely to go together. It's not forming a sentence like humans do; it's generating a new sentence. And so, adding it to a search engine like Bing, it's scraping all these relevant links and information and synthesizing new sentences just based on how it thinks things should be pieced together. It's not sentient, it doesn't understand what it's saying, and so it's definitely not fact-checking itself.
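That "words most likely to go together" idea can be illustrated with a toy next-word model. This is a deliberately tiny sketch, nowhere near how a real LLM works internally, but the loop has the same shape: look at what's been generated so far and pick a statistically likely continuation, with zero notion of whether the result is true:

```python
import random
from collections import defaultdict

# Toy bigram model: for each word, remember which words followed it
# in the training text, then generate by repeatedly sampling a
# likely next word. No facts, no understanding -- just co-occurrence.

corpus = (
    "the cheetah is the fastest land animal and "
    "the cheetah can sprint at over sixty miles per hour"
).split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every sentence it produces is locally plausible (each pair of words really did appear together), which is exactly why fluent output can still be wrong.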

So we have to keep that in the back of our minds through all of this, right? Every time you see a headline. So it's really interesting with these search engines, right? Because on one hand you have Bing, who has everything to gain, and on the other hand you have Google, who has everything to lose. I've had access to this new Bing for a little bit. It's a limited preview before they push it live to the rest of the world.

I've just been playing around with it. Basically, it adds this chat experience alongside regular Bing. It's essentially the same experience as talking to ChatGPT, but instead of being limited to a fixed dataset that cuts off at 2021, it'll pull from the entire current web that Bing can scrape. So like I said, you can type in a question, flip it over to chat, and it'll give you a sort of nicely written summary that's synthesized based on what it finds for similar queries.
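The "ChatGPT plus the live web" setup being described is what's often called retrieval-augmented generation: run a normal search first, then hand the scraped snippets to the model as context for its answer. Here's a hypothetical sketch of that flow; every function, field, and URL below is made up for illustration, not Bing's actual API:

```python
# Hypothetical "search + chat" flow (retrieval-augmented generation).
# web_search and llm are stand-ins, not real Bing/OpenAI interfaces.

def web_search(query):
    # Stand-in for a real search backend; returns snippets with sources.
    return [
        {"url": "https://example.com/cheetah",
         "snippet": "Wild cheetahs typically live around 10-12 years."},
    ]

def answer(query, llm):
    results = web_search(query)
    context = "\n".join(f"[{i + 1}] {r['snippet']}"
                        for i, r in enumerate(results))
    prompt = (
        "Answer using only the numbered sources below, citing them as [n].\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    # The model writes a summary grounded (hopefully) in the snippets;
    # the caller renders the URLs back as footnotes.
    return llm(prompt), [r["url"] for r in results]
```

Under this scheme, the little footnotes and end-of-answer links are just the retrieved source URLs rendered alongside the generated summary, which is why the citations can look solid even when the synthesis between them is off.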

So if I ask it something kind of simple, like, what's the average lifespan of a cheetah in the wild? It gives me an answer, right? It gives me a convincing bunch of sentences. It actually gives me more information than I asked for. It tells me about cheetahs in captivity too, which makes it, you know, feel very convincing. It also gives little footnotes and citations for some of its sources, and it gives links at the end if you want to dig in some more.

It's really impressive, actually. It looks good to me. This is a real product that's gonna ship to people all over the world in the next month or two, I think they said. But this could only come from Bing right now.

Like, the more you use it, the natural language is super, super impressive. The fact that it gives me a convincing-sounding couple of sentences in a row and strings it together based on my input, super cool. But the more you use it, the more you start to see these weird patterns and these habits and these shortcomings. Again, mostly in the fact that sometimes it's just gonna be wrong.

A little game I like to play is to ask it a question you already know the answer to, and then read what it says and spot the error. So I asked it just now, okay, what are the best smartphone cameras right now? And it gave me the S23 Ultra, Pixel 7 Pro, and iPhone 14 Pro Max, with a nice little writeup with some specs for each. That's actually a pretty good list, but it is wrong about some of these numbers here. The S23 Ultra has a 200-megapixel camera and a 12-megapixel front-facing camera.
But yeah, okay, it's mostly right. I then asked it, what are the five best electric vehicles out right now? And it gives me five reasonable options. But I don't know any expert who would put the Jaguar I-Pace on their list right now and leave the Rivian off. Like, basically, the answers it gives are really convincing to someone who doesn't already know anything about that subject.

But if you are already an expert in the subject that you ask it about, then you'll find that the answers are like, C-plus, maybe B-plus sometimes at best. So now you see what's happening. Now, suddenly, when you're asking ChatGPT or Bing about a factual thing or something you need help with, you also should probably add these layers on top. Like, am I a complete newb in this topic that I'm asking it about? Am I just willing to blindly trust whatever this spits out without any further research? Is a B-plus answer gonna be good enough for me, even if it might have some possible errors in it? You know, that might be good enough for just asking, like, how old a cheetah gets or something like that, but maybe not good enough for planning a trip, or meal planning for someone with an allergy, or something like that. And then, if you look around the internet, people have gotten it to give increasingly unhinged answers over time as it tries to simulate conversations and stay in the flow with natural language.

I've seen anywhere from arguing about simple corrections, to spewing weird stories about how it's spied on its own developers or how it wants to be sentient, to gaslighting people about things, lying about its previous answers, and just saying some straight-up scary stuff. Just go to the Bing subreddit for, like, an all-you-can-eat helping of all the insane stuff Bing has said to just the people testing it over the past couple of weeks. Like, can you imagine, can you imagine if Google did this? If Google Search, at the top of search for people, was just spewing out random stories and misinformation and, like, all kinds of insane, unhinged things? That would not fly.

Now, to be fair, this version of Bing isn't out to the public yet, right? So it is still a small-group testing phase, but even with this, Microsoft knew that some of it was gonna get out there and potentially go viral. It feels like they even basically programmed in lots of friendly emojis to try to soften the blow. So when it knows it's giving an answer to maybe a more controversial topic, or something that it doesn't have a super clear answer for, you might get a little smiley face at the end, just so you don't, you know, take it too seriously. Also, literally as of today, when I'm testing this, it started completely bailing on a lot of topics that might be the slightest bit existential or dangerous. It just says, hmm, I prefer not to continue this conversation. And then it just stops. It just refuses to answer any more questions on that topic until you reset it.

Which seems like a pretty good failsafe. It's a pretty good idea in hindsight. But we've already seen the other stuff. It's gotten out there; the damage has been done.

Like, the point still stands: this could have only come from Bing. Like, a lot of people might have forgotten about this, or might not have even known about it, but Google has been working on conversational AI stuff for years. We've seen Google Assistant. But they also literally showed an AI chatbot demo on stage at Google I/O in 2021, where you could have this whole conversation with any person or object or anything in the universe that you wanted.

Their demo on stage was asking Pluto about itself. Nice and friendly, right? Oh, what's it like to be you, Pluto? What would it feel like if I visited you? How do you feel so far from the sun? The difference with Google is this was never shipped as a product. Like, this was an internal research project. But the idea of displacing their massive search and ads business with a chatbot that gets things wrong all the time is insane. It can't happen, right? Literally, search and ads is more than half of Google's revenue as a company.

That's what having everything to lose looks like. Now, to be fair, Google did hold an event in Paris the day after Microsoft's event, which was them talking a little more about their chat-with-search AI plans, and they did say they're planning on eventually putting a chatbot on top of Google Search. It's called Bard. It was much more subdued, though.

And yes, it also literally did have a factual mistake in the promo for it. So look, I actually like the idea. I obviously think it's smart, when you're on the precipice of this huge AI thing, to have AI kind of be this co-pilot for the web, to help you around the internet. The idea of it summarizing a longer piece into some bullet points accurately, that would be great.

Like, the fact that it could give you SparkNotes for a longer book you haven't read yet, cool. It could even help you meal plan, help you plan a trip, help you make a purchase decision. But it's clear that we're still at the beginning of this. Like, there are so many unanswered questions, from, obviously, the fact-checking, to, like, do schools embrace this or ban this? Or, like, how do search engines keep sending traffic to the publishers who are the sources the chatbot is scraping from? I mean, you get the links at the bottom, but a lot of people are not gonna click those anymore if you just give 'em the answer above the search results.
So right now, in its current stage, my take is: anything we do with any of these AI tools should be a collaboration with the human touch. Like, you wouldn't just put a query into DALL-E, take whatever it generates, put it in a frame, and call that art, right? It's more for inspiration for your own paint and canvas. Like, you wouldn't ask ChatGPT to write an essay and then just copy and paste it and submit it as your own. It's supposed to be the inspiration, the framework, for your own piece, for the human touch.

So of course you shouldn't ask the Bing chatbot what TV you should buy and then just, like, mindlessly click and buy the first one that comes up. I mean, it could be fine, but it could also be a C-plus answer. You should use it as a springboard for your own, more informed research, especially on topics you don't already know much about. Like, maybe don't just buy 19 backpacks immediately after asking how many can fit in the back of a Tesla.

Maybe check its work first. Thanks for watching. Catch you in the next one. Peace.


By MKBHD

11 thoughts on “The biggest problem with AI!”
  1. Jason P. Chambers says:

    I couldn’t agree more MB. I asked ChatGPT a series of questions about a book and several of its answers were just plain wrong. I asked who the individuals were that were mentioned in Chapter 1 of this text and it named five people and gave a compelling description of how the chapter discussed them. None of those people were mentioned in the chapter and none of the items it described were present. So, note of caution to any students out there hoping to use ChatGPT to quickly write that book review for English class.😅 Of course YMMV.

  2. Andrei Marasigan says:

    Here before the title change hehe

  3. Ahmet Can says:

    that one frame edit flaw… ouch 😀 I feel ya

  4. IC says:

    Are those example questions “factual”? What is the factual answer to “what are the five best electric cars”?

  5. Calvin Owen says:

    Microsoft really needs to drop the Bing branding. This would be prime opportunity to somewhat redeem Cortana, or spin a new similar kind of branding.

  6. Joshua Omatsuli says:

    For the record, I took my course mate's project (since we had the same project topic), copied and pasted it into ChatGPT, and told ChatGPT to rephrase it. It was incredible what ChatGPT did. I got a new project in my own words 😁😃

  7. April says:

    I knew I could rely on MKBHD on this topic. I think we are mindlessly following this new trend too much. This chatbot AI is in its absolute initial stage. I think we need at least 1-2 years to let it become "sober".

  8. Guru says:

    Background music was lit🔥😅

  9. Steve Muzak says:

    Ask AI engines about any political non-woke stuff and you'll notice that AI is censoring the answers. AI isn't that smart.

  10. ContinuumXT says:

    Time to 100m users isn't what's impressive with the right backing. Staying in business for a long time, that's impressive.

  11. Chef_Moquin95 says:

    I really hope Microsoft can make Bing competitive, as I've found Google searches to be getting worse over time, putting results from sites that just pay more at the top.
