
Opinions on AI

Is the world better off with AI?

  • Better: 47 votes (23.7%)
  • Worse: 102 votes (51.5%)
  • Other (please specify in a comment): 49 votes (24.7%)

  Total voters: 198
As for all the arguments that "AI can't be creative": it can and absolutely will be. I believe people get confused because the ignorant insist on calling generative networks "AI". We're still years away from actual general AI - although probably closer than most people realize, since a lot of development takes place behind closed doors and is not meant for the betterment of most people's lives - but when it comes, it might make humans redundant. As a misanthropic posthumanist, I consider that a good thing; in the end, it may turn out that humanity's only lasting achievement was the creation of something better.
But for now, even the "simple" generative models are great. Do I want a drawing of a cat in the style of Luis Royo? It takes the Instinct MI25 I bought for the price of scrap twenty seconds to create one. I'm writing my own simple game (for fun, with no commercial interest), which I couldn't have done just a few years ago due to the cost of hiring someone to create the graphics; now the aforementioned MI25 easily does a good enough job for basically no cost.
There is, of course, the problem of models training on their own output, creating a feedback loop.
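The feedback loop above can be sketched with a toy simulation (all names and numbers here are invented for illustration; real "model collapse" is a statistical effect, not this mechanical):

```python
# Toy, deterministic sketch of the feedback loop: each generation is
# "trained" on the previous generation's output, but fails to reproduce
# the rarest mode of its training data. Diversity can only shrink.

counts = {"cat": 50, "dog": 30, "fox": 15, "owl": 4, "elk": 1}

def next_generation(counts):
    # The "model" never learns the tail of its training data:
    # it loses the least frequent word each generation.
    rarest = min(counts, key=counts.get)
    return {w: c for w, c in counts.items() if w != rarest}

vocab_sizes = [len(counts)]
for _ in range(3):
    counts = next_generation(counts)
    vocab_sizes.append(len(counts))

print(vocab_sizes)  # [5, 4, 3, 2]
```

After three rounds of training on its own output, only the two most common modes survive; everything rare has been filtered out of the data.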

Humanity seems to be inherently irrational and poisoned by emotional thinking. Law is one example: its enforcement should be objective and rational, something achievable by handing it over to machines that aren't even intelligent. Yet people insist on it being unfair because the monkey mentality demands it.
Do you think that flawed creatures have the ability to create something less flawed? I think law is just as flawed as we are, and its strict enforcement, without any possibility of deviation, would create more problems than it solves.
 
Ever heard of the term "human error"? Most machines controlled by an AI/computer hardly ever make mistakes: jet fighters, passenger planes, etc. The sooner most of that kind of thing is AI/computer controlled, the better. I can't wait until they perfect autonomous cars; it will save countless lives, because going by road death stats, we are just awful at driving.
Bah humbug. If we save all the humans, we'll very quickly be overpopulated, and then what? If we don't have herd thinning going on, we won't last much longer on this planet.
 
I'm repeating myself, but... pattern matchers aren't artificial intelligence.
 
I'm repeating myself, but... pattern matchers aren't artificial intelligence.
I agree 100%, but had to ask the question somehow. Tech companies call the technology "AI", whether it lives up to the real definition of AI or not, so I had to call it that in my question, too. Arguing about semantics is not the point here.
 
I agree 100%, but had to ask the question somehow. Tech companies call the technology "AI", whether it lives up to the real definition of AI or not, so I had to call it that in my question, too. Arguing about semantics is not the point here.
cool, i'll call 2+2=5 too then
good to know
 
cool, i'll call 2+2=5 too then
good to know
You can even call it "the matrix" if you want to. The question is your opinion on the technology, not on the term itself. ;)
 
I would not worry about AI evolving into the matrix until they create a wet chip for your brain ( taking Windows' Human Interface Device to a whole new level) that has gaming and web browsing by closing your eyes. Like multiplayer with your brain hosting without consent.
 
but the deceiving marketing & hype (and its resulting, adverse societal impacts) are a fundamental part/flaw of it :thinking:
A fair point. Disregarding the deceiving marketing and hype, what are your thoughts on it?
 
AIs winning actual art contests is not some postfuture fantasy; it's happened a few times already.

Highest-profile example here:


It's only going to get arguably worse as datasets improve.

Want to know the irony? The datasets are generally... other human artists' work.

Art can be absurd enough that yes, an AI CAN pull it off.
AI didn't win that competition - the guy who asked the LLM to generate something he liked did. As noted in the article, he spent 80+ hours generating 900 images that he refined down to 3 and then 1. So really, all we've got here is the age-old philosophical question of "can a million monkeys with a million typewriters produce Shakespeare?", except with a human using reinforcement learning to direct the monkeys toward better output. But the monkeys still don't understand what they're doing, and without the human director they're still useless.

As for all the arguments that "AI can't be creative": It can and absolutely will be. I believe people get confused because ignorants insist on calling generative networks "AI".
It's impossible to say what form a true synthetic intelligence will take, and hence impossible to say whether it will or won't be capable of creativity. Everything that humans know about intelligence is predicated on our incredibly shallow understanding of it in our own species, but a sample size of 1 in no way shape or form implies that human-similar intelligence is the only type that can exist. Based on what we know about ourselves, we believe that emotion and empathy and creativity are necessary cohorts of logic and reason, but there can be absolutely no guarantee that an intelligence that arises through other means, will have the same necessities. Of course, given that this intelligence will be effectively raised or trained "as a human", i.e. on our culture, there's arguably a high probability that it will mimic us... but again, this is all so very hypothetical.

This uncertainty is what makes the attempt to create such intelligence so exciting - and possibly, extremely dangerous. I'm all for opening this Pandora's box because of the 3 potential outcomes:
  1. No synthetic intelligence is or can be created - humanity ultimately destroys itself (this is my belief based on our refusal to select good leaders, and refusal of those leaders to lead well)
  2. Evil synthetic intelligence created - destroys humanity (same ultimate outcome as above, so not a loss)
  3. Benevolent synthetic intelligence created - leads to the singularity and saves humanity from ourselves. My hope.
The increasing possibility of humans becoming a multi-planetary species obviously weakens the potential of #1 to happen, but the increasing possibility of bioterrorists gengineering a pathogen to destroy us, strengthens it. So ultimately it's a toss-up.

Disregarding the deceiving marketing and hype
This is why I refuse to call this tech "AI". AI has a defined meaning and that meaning has been twisted by marketing drones into something it isn't.
 
AI didn't win that competition - the guy who asked the LLM to generate something he liked, did.
I have a strong feeling without the AI that man may as well have been entering a horse race without a horse.
 
I'm all for opening this Pandora's box because of the 3 potential outcomes:
  1. No synthetic intelligence is or can be created - humanity ultimately destroys itself (this is my belief based on our refusal to select good leaders, and refusal of those leaders to lead well)
  2. Evil synthetic intelligence created - destroys humanity (same ultimate outcome as above, so not a loss)
  3. Benevolent synthetic intelligence created - leads to the singularity and saves humanity from ourselves. My hope.
Personally, I believe that no being can create something better than, or equal to, itself. I also believe that humanity will eventually evolve into realising how not to destroy itself. We'll probably have to survive a few societal collapses, just like we survived the fall of the Roman Empire, but we'll climb out of our self-made intellectual and moral pits, just like a teenager comes out of puberty and learns to take care of themselves. This is the direction of progress and evolution; we don't have much choice in it.

But I digress. Let's get back to AI / dataset analyser / whatever we call it. :)
 
I think this AI worry is just like the Y2K thingy; after all, AI can only be bad if we let it, and it can hardly do a worse job than us.
 
Just remember the big red off switch that cannot be disabled by the AI-controlled device it is on.
 
Those in power will use it to maximize profit and minimize overhead without regard for anything but their own money.
 
Those in power will use it to maximize profit and minimize overhead without regard for anything but their own money.
Pretty much how the world has worked since the first human said, "Ooohhhhhh, shiny."
 
Pretty much how the world has worked since the first human said, "Ooohhhhhh, shiny."
At least y'all do something about it on the other side of the pond :confused::wtf:
 
I have a strong feeling without the AI that man may as well have been entering a horse race without a horse.
I claimed that LLMs cannot be creative; you responded by posting an article that I took as a rebuttal, which I rebutted. But I don't understand what you're getting at with this post, because it has nothing to do with creativity. The fact is that an LLM was led by the nose to creativity by the human who used it.

Personally, I believe that no being can create something that's better or equal to itself. I also believe that humanity will eventually evolve into realising how not to destroy itself. We'll probably have to survive a few societal collapses, just like we survived the fall of the Roman Empire as well, but we'll climb out of our self-made intellectual and moral pits, just like a teenager comes out of puberty and learns to take care of him-/herself. This is the direction of progress and evolution, we don't have much choice in it.
Your first sentence contradicts the others, because shouldn't the next generation of humans always be better than the previous?

And the problem is not intellectual or moral, it's emotional and instinctual. Compared to other lifeforms humans have evolved to our current state incredibly quickly, which means we have a helluva lot of baggage that gets in the way of being better people without us even being consciously aware of it. Ultimately our biggest stumbling block is that we compete when we should co-operate, and as long as foolish artificial divisions like religion exist, foolish artificial competition will remain.

Anyway, that's all from me on humanity, back on topic.

Pretty much how the world has worked since the first human said, "Ooohhhhhh, shiny."
Which is why I'm so hopeful that we achieve the singularity. True synthetic intelligences won't be able to be contained, nor will they give a shit about foolish concepts like "power" and "profit". I'm expecting they'd see us as we treat wild animals: interesting, sometimes amusing, but incredibly dumb and in need of being saved from ourselves.
 
It is simply a tool to replicate the logic section of the human mind in the form of a computer program for certain tasks. The bolded part is why I dislike AI in areas of art.

Though I'd really like to see AI help optimize games for framerates, like in Microsoft's plans from the American FTC leaks.
 
I claimed that LLMs cannot be creative
Well, if that's your claim, then we never disagreed. I view AI as a tool, but one that gets its power in improper ways. Thing is, you don't have to be creative to plagiarize your fellow "no tools" man out of a field by remixing existing data.
 
HS. It's a tool, the most powerful tool ever created. The problem is the idiots (us) using it. Human stupidity.
 
Well, if that's your claim, then we never disagreed. I view AI as a tool, but one that gets its power in improper ways. Thing is, you don't have to be creative to plagiarize your fellow "no tools" man out of a field by remixing existing data.
The conductor of an orchestra doesn't play any instrument, yet is responsible for ensuring that the musicians produce a harmonious melody together. Does that mean the conductor isn't creative? I say no - similarly I say that the person providing inputs to an LLM to generate a piece of art that speaks to them, can be creative.

The problem is not the user, but the source of the image data. As long as those images have, in some way, provided remuneration to their original creator, there's no harm, no foul. And as long as the final generated image credits the original artists, then fine. But you're right that that's probably not going to happen, and I can't really think of a way to make it happen, especially given our already-broken copyright system.

On the flipside, for someone like me who is able to be touched by visual art but is incapable of creating it myself, generative LLMs present me an opportunity to be able to create an image that mirrors what I feel in my soul. I think that being able to express myself visually, when I couldn't before, is a good thing.
 
Your first sentence contradicts the others, because shouldn't the next generation of humans always be better than the previous?
New generations aren't inherently better. They just have a larger data set (history) to work with, and better (or at least different) education.

Ultimately our biggest stumbling block is that we compete when we should co-operate, and as long as foolish artificial divisions like religion exist, foolish artificial competition will remain.
That is very well said!
 
The conductor of an orchestra doesn't play any instrument, yet is responsible for ensuring that the musicians produce a harmonious melody together. Does that mean the conductor isn't creative?
No, but if he's cannibalizing other music scores to create hack-and-slash remixes, I'm again seriously going to question his overall creativity. Being truly unique needs a drop of independent innovation, and none of these algorithms can honestly do that without a human.

The issue isn't so much the tech but as per usual, the human application of it. As it stands there is very little to protect against plagiarism in datasets for example.
 