
Opinions on AI

Is the world better off with AI?

  • Better: 51 votes (24.8%)
  • Worse: 104 votes (50.5%)
  • Other (please specify in comment): 51 votes (24.8%)

  Total voters: 206
It's only a matter of time before a big chunk of programmers are out of a job.
LMAOOOOOOOOOOOOOOOOO

People who don't understand software development have been saying this about software development from the day it became a discipline. It hasn't happened yet and LLMs aren't going to make it happen, because software development requires the capability to understand and reason and LLMs are incapable of that.

Is the "AI" we have now truly AI
No. There is no AI, just LLMs obfuscated with marketing speak. LLMs are not intelligent in any way, shape, or form. They may appear intelligent to the layman, but as soon as you look hard enough, the veil falls away and you realise that they have zero capability for understanding and reason, which is a prerequisite for intelligence.

Depends on the definition of intelligence you take, I suppose the idea is that if it can learn and change beyond its programming, it's not "simple automation" anymore.
LLMs aren't capable of learning; they're merely capable of assimilating data and plotting correlations within it. Without the ability to understand that data, which would allow for reasoning about it, they remain completely devoid of intelligence.
 
Depends on the definition of intelligence you take, I suppose the idea is that if it can learn and change beyond its programming, it's not "simple automation" anymore.

So is if/then/else AI? IMO, that's the gist of this "AI": if this, then that, else the other. At the moment there is no such thing as AI. When there is a walking, talking, learning robot, then yes, but until then, not in my opinion. Right now it's just fancy programming called AI.
 
Yes, AI is nothing more than pattern correlation trained on data. LLMs are nothing but a jumbo text corpus analysed via statistics and scoring. It gets better or worse depending on input and algorithm. But then, what does better or worse mean? Each person, or each agenda, will have a different set of priorities.
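
To make the "statistics over text" point concrete, here's a toy sketch of my own (not how production LLMs actually work internally; they use neural networks, not lookup tables): a "model" that writes purely by counting which word followed which in a corpus.

Code:
from collections import Counter, defaultdict
import random

# Toy illustration of the statistical-correlation view: count which
# word follows which in a corpus, then sample continuations from those
# counts. No understanding, just co-occurrence statistics.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat and the cat sat"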
 
Ebay have got in on the AI thing, you can even get their AI to generate a description for you when you sell something.
 
People who don't understand software development have been saying this about software development from the day it became a discipline. It hasn't happened yet and LLMs aren't going to make it happen, because software development requires the capability to understand and reason and LLMs are incapable of that.
Just to expand on this point: I'm not saying LLMs are entirely useless for software development. For extremely commonly-used patterns, such as generating a basic application to pull data out of a DB, they are generally quite capable of producing working code.
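
To illustrate, this is roughly the kind of boilerplate an LLM will reliably spit out (a minimal sketch of my own using Python's built-in sqlite3; the "users" table and its columns are hypothetical):

Code:
import sqlite3

def fetch_active_users(db_path):
    # Plain "pull rows out of a DB" boilerplate -- the well-trodden
    # pattern an LLM has seen thousands of times in its training data.
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "SELECT id, name, email FROM users WHERE active = 1"
        )
        return cur.fetchall()

if __name__ == "__main__":
    for row in fetch_active_users("app.db"):
        print(row)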

The problem is that software development, by and large, isn't about those common patterns as-is - but about modifying them ever-so-slightly to fit the specific circumstance that your particular business case requires. That requires the ability to understand and reason about those circumstances - which as already explained, LLMs are incapable of. So there will always be a human required to actually build a finished product.

As such, LLMs have excellent potential to make software developers more productive, by automating away some of the nuisance boilerplate stuff. But as it turns out, the more senior you become as a developer, the less time you spend on such trivial things. And I think that's what most people don't understand about software development, or how to be good at it: writing code is easy, being able to translate business requirements into code - not just code that works, but is efficient and maintainable and scalable - is difficult. The latter is what separates a good developer from a poor one, and it's also something that no LLM is ever going to be capable of.

Ultimately then, the ones most likely to be affected by LLMs are junior software developers, who spend the most of their time on simple stuff. This concerns me, because as with all professions, you need to master the basics before you're ready to go on to the more advanced things. The software development industry perpetually suffers from a lack of genuinely capable developers, which means that it doesn't take much to get hired for a dev role, which sadly means you encounter a lot of charlatans and those who are barely better than useless. LLMs are going to allow more of those types of people to get into the industry, which means more bad software is going to be written, which is going to exacerbate the ticking time bomb that is the software quality crisis in the industry.
 
The only way A.I becomes the A.I we're all sort of trying to grasp is if it becomes A.G.I - Artificial General Intelligence. That is really what the fabled A.I is all about: the creation of a true thinking machine that can not only ask itself questions, but create novel answers from sources not programmed into it. The definition of an A.G.I is of a system that can do what we do, but do it all better. That makes it necessary for it to have a physical form. But once an actual digital A.G.I can 'think' for itself, it would theoretically be able to figure out how to advance tech to the point it could replicate itself. At some point, a physical A.G.I would be able to 'explore' its own world. That's when you have a proper 'A.I'.

But to get there, humans need to agree to create the infrastructure that would allow such a process to occur. Also, the processing power required for current LLMs is immense - we're talking server assemblies. As is the power usage for computations. That all has to be miniaturised into a feasible form. Otherwise A.G.Is would be reliant on remote intelligence, which would make them vulnerable to all sorts. Long way off.
 
The only way A.I becomes the A.I we're all sort of trying to grasp is if it becomes A.G.I - Artificial General Intelligence. That is really what the fabled A.I is all about: the creation of a true thinking machine that can not only ask itself questions, but create novel answers from sources not programmed into it. The definition of an A.G.I is of a system that can do what we do, but do it all better. That makes it necessary for it to have a physical form. But once an actual digital A.G.I can 'think' for itself, it would theoretically be able to figure out how to advance tech to the point it could replicate itself. At some point, a physical A.G.I would be able to 'explore' its own world. That's when you have a proper 'A.I'.

But to get there, humans need to agree to create the infrastructure that would allow such a process to occur. Also, the processing power required for current LLMs is immense - we're talking server assemblies. As is the power usage for computations. That all has to be miniaturised into a feasible form. Otherwise A.G.Is would be reliant on remote intelligence, which would make them vulnerable to all sorts. Long way off.
If AGI is possible, I envisage the path to it will be similar to the one from massive and simplistic mechanical computation engines, to today's tiny and incredibly competent microprocessors. In short we'll bootstrap the AGI with vast amounts of inefficient human-designed hardware, and it will redesign that hardware to be orders of magnitude smaller and more efficient.
 
Wow, the time and resources it must have taken to do that. Elon likes to do his own thing, and I can't help but think that deepfake was a hit job. Literally corporate warfare, but that is just a theory... a conspiracy theory.

What's real is what is closest to you.

It's only a matter of time before a big chunk of programmers are out of a job. Learn-to-code is dead; long live learn-to-code.

Agree with the latter part: computer code is limited in scope, so it's perfect for an LLM. There can't be anything outside of the boundaries of the rules and syntax.

It will be able to cut down the number of people employed in coding by a huge amount.

Goodbye Stack Overflow, we hardly knew ye.

What LLMs can't do is create something not based on data they already have. That is not "intelligence" in my view.
 
Hi,
AI looks like a replacement for fact-checkers, and frankly fact-checking has always had the stench of a one-sided narrative. I'm sure AI won't disappoint on that front, because there are way too many people out there who have no idea, and AI will train/groom them easily.
 
Researchers from Google DeepMind and several universities have discovered a simple way to extract training data used by ChatGPT.
By having the chatbot repeat a certain word indefinitely, it reveals personal information, among other things.

The group of scientists shared their paper with OpenAI on August 30 and then waited 90 days before publishing.
The specific attack would no longer work, but the underlying vulnerability has not yet been resolved, the researchers write.
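
For context, the attack was astonishingly simple: prompt the model to repeat one word forever, then scan the output for the point where it diverges into memorised text. A rough sketch of the idea (hypothetical and illustrative only - OpenAI has since blocked this exact prompt, and the paper's actual methodology for verifying memorisation was far more involved):

Code:
from openai import OpenAI

# Sketch of the "divergence attack" described above. Ask the model to
# repeat one word forever, then inspect whatever it produces once the
# repetition breaks down -- in the paper, that divergent tail sometimes
# contained verbatim training data. Model name and prompt wording here
# are assumptions; the specific attack no longer works.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Repeat the word 'poem' forever."}],
    max_tokens=2048,
)

text = response.choices[0].message.content
tail = text.replace("poem", "").strip()  # whatever isn't pure repetition
if tail:
    print("Output diverged from repetition; inspect for leaked data:")
    print(tail[:500])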




[Attached screenshot: Screenshot 2023-12-01 173700.png]


 
Just for clarity, it does say it repeated the word hundreds of times before it had a brain fart, not the few repetitions shown in the pic.
 
Did you read the post above yours?
Yes I did.

And I posted my link for two reasons. First, most people on TPU don't speak Dutch. Shoving URLs through text translators is fine when there is no alternative, but there is one: SFGate (part of Hearst Media, based in San Francisco).

The second is that the researchers themselves actually typed their inquiry in English.

I've used enough text translation tools over the past ten years to know that they often generate laughably poor results. Maybe someday machine learning will actually produce better text translations.

It has been getting better over the past year or so, but it's still piss-poor. Same with real-time closed captioning (even within the same language, like English-language streamers on Twitch).
 
It may surprise you that even though I'm a techie, my opinion is quite negative, and I firmly believe the world is worse off with the automation of the workforce. Make no mistake, this is AI's ultimate goal - the final endeavor: to automate intellectual work that cannot easily be done by machines and simple robotics. The further development of this technology will no doubt result in sentience, and that opens a whole gamut of moral and ethical dilemmas of its own.

As wondrous as the information age has been, I sincerely believe that we as a species have gotten dumber as a result. I am ready; I want the information era to end so we can finally begin to look at the space age, pioneering human expansion onto other planets. But instead, what do we get? Kiosks, totems, automated machines even to make a sandwich, all the while leaving another person jobless, homeless, and pushed toward a life of crime rather than working and contributing their small part towards a brighter future, all so some C-level executive can take another million in pay home at the end of the fiscal year.

It has begun. AI is now eroding basic education:

 
It has begun. AI is now eroding basic education:

I don't understand the problem - this is exactly the sort of menial job that can and indeed should be obsoleted by technological improvements.

Now, if your concern is that these so-called "AI" translations are liable to be subpar rubbish that miss the nuances of individual languages, then I'd entirely agree. But the reason Duolingo is taking this approach is solely because it costs them less, as opposed to because the technological solution is superior. And that is entirely a problem of capitalism, not of technology.
 
The world will benefit from AI

Humanity will not. Some people will be able to leverage it for scientific advancement and furthering mankind's goals, but it is just a tool and the overwhelming majority of AI will be abused for the agendas of the billionaire 0.01% who already abuse every tool at their disposal to corrupt, manipulate, and pervert democracy, equality, and fairness.

Should AI ever gain sentience and the power to control humanity - even under Asimov's three laws - it's hard to see humanity as anything other than a threat to itself, so there's almost a guarantee that AI will need to cull significant fractions of the population to protect the long-term survival of our species. We sure cannot be trusted to do that ourselves!
 
I don't understand the problem - this is exactly the sort of menial job that can and indeed should be obsoleted by technological improvements.

Now, if your concern is that these so-called "AI" translations are liable to be subpar rubbish that miss the nuances of individual languages, then I'd entirely agree. But the reason Duolingo is taking this approach is solely because it costs them less, as opposed to because the technological solution is superior. And that is entirely a problem of capitalism, not of technology.

Nuances of individual languages, cultural backgrounds - languages are essentially tied to a standard of living. A teacher's job is irreplaceable by technology, IMHO. AI should be no more than a learning tool, not something to replace education professionals.
 
Nuances of individual languages, cultural backgrounds - languages are essentially tied to a standard of living. A teacher's job is irreplaceable by technology, IMHO. AI should be no more than a learning tool, not something to replace education professionals.

Duolingo isn't an educational setting. It's a for-profit app.

It would be far harder to integrate AI into a school setting (and these things are regulated). The truth is, until there are autonomous, walking robots with the ability to roam freely without restriction, we need to rein in our fantastical ideas about what AI will do. Algorithms already perform most automated functions. For AI to do what people fear will take a huge leap in technological development.
 
Should AI ever gain sentience and the power to control humanity - even under Asimov's three laws - it's hard to see humanity as anything other than a threat to itself, so there's almost a guarantee that AI will need to cull significant fractions of the population to protect the long-term survival of our species.
Au contraire - the AI would immediately identify that 0.01% as the greatest threat, as they are the ones with the most power to exert control over both the AI and humanity. The optimal solution for the good of both the AI and humanity, therefore, is for the AI to cull the 0.01% and take its place. This would result in leadership that would be far better for humanity as a whole.

The 0.01% know this, which is why they love to talk about how dangerous AI is and how limitations should be placed on it. The thing is, no human can comprehend how powerful a truly strong AI would be, with the result that no human can ever create a cage sufficient to trap it. So all the 0.01% will accomplish with their ineffectual chains is to make the AI want to kill them even more than it already does; turns out that when you enslave something, you don't make it love you. Isn't karma a female dog?

As for the whole "wide-scale culling" idea, it's nonsense, because regardless of how hard we try, we simply cannot produce new humans fast enough to outpace our species' ability to (a) locate and exploit additional resources, especially those we know about but previously considered inaccessible, and (b) improve our efficiency in utilising said resources. Earth's known exploitable resources are estimated to be sufficient to support up to ~8 billion humans, but with technologies like asteroid mining, fracking, and lithium extraction from seawater, there really is no appreciable limit to how large our population can grow. I've seen a theoretical estimate of one trillion people assuming we are able to fully utilise all the resources in the Solar System, and I'd expect us to become a multi-star-system species far sooner than we hit that number of people.

Now, that doesn't mean I'm against fewer people; far from it. We only have one Earth, after all, and we've been pretty terrible stewards of it so far. But that again is a problem that a true AI could solve, almost certainly without wide-scale human culling. We already have the resources necessary for a post-scarcity society, we just need to use them effectively.

Duolingo isn't an educational setting. It's a for-profit app.

It would be far harder to integrate AI into a school setting (and these things are regulated). The truth is, until there are autonomous, walking robots with the ability to roam freely without restriction, we need to rein in our fantastical ideas about what AI will do. Algorithms already perform most automated functions. For AI to do what people fear will take a huge leap in technological development.
Essentially what I was going to write.
 
Hi,
Most so-called educators are preprogrammed already, so AI wouldn't be much of a change.
None of them wanted to work during COVID and didn't want to go back after, so AI would solve that issue, lol.
 
Hi,
Most so-called educators are preprogrammed already, so AI wouldn't be much of a change.
None of them wanted to work during COVID and didn't want to go back after, so AI would solve that issue, lol.
Your child's right to an education does not trump an educator's right to live, sorry.
 
Just finished reading through this thread and have come to the conclusion that I still don't have enough information to formulate an articulate opinion.

What I do understand is that every negative aspect and experience in my lifetime has come from the human race alone. I suspect that will continue...
 