More like, stupid in, stupid out
Back on topic (sort of), if I may. There is much to unpack here. Unfortunately, a large language model predicts only language, not observations or logical inferences. As far as I understand, there are no mechanisms for detecting self-contradictory statements, logical fallacies, or physically impossible claims. We can regard the results as "creative or insightful," entertaining, or humorous, as we like, but giving them more authority than the least reliable training source could be dangerous.
True, but it does not claim to know more than all of us collectively. An encyclopedia has authors who identify themselves and make an effort to be accurate.
Even if humans can't tell the difference, "Mother Nature" can. Companies dealing in law, medicine, engineering, finance, and other fields where failure can lead to legal liability should avoid AI or severely restrict its use.
Shit In, Shit Out
And wisely it should be FIFO, otherwise, how would "your" environment become
An eye for an eye makes us all blind. This notion that, because AI is learning from web content, we should now create inaccurate and fake content to make AI worse will lead to a fake world. Perhaps we will end up with user-agent testing, similar to robots.txt and the Google bot, whereby fake content is served up if the visitor is an AI-training-data-collector bot and the real content if the user-agent is human.
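For what it's worth, a crude version of that user-agent test can even be sketched in Node-RED itself. This is only a minimal illustration, assuming an "http in" → function → "http response" flow; the bot-name patterns and the msg.audience property are just examples I made up, not a complete or authoritative list.

```javascript
// Hypothetical function node placed between an "http in" node and the
// nodes that build the response. It inspects the User-Agent header and
// marks requests that look like AI-training crawlers so a downstream
// switch node can route them to decoy content instead of the real page.
const ua = (msg.req && msg.req.headers['user-agent']) || '';

// Example patterns only - a real deployment would maintain a proper list.
const looksLikeTrainingBot = /GPTBot|CCBot/i.test(ua);

msg.audience = looksLikeTrainingBot ? 'bot' : 'human';
return msg;
```

Whether feeding crawlers decoy content is a good idea is, of course, exactly the question being debated here.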
I don't really insert errors deliberately; there could be humans trying to use my examples.
At least ChatGPT has some sort of censorship built in, hence all these people talking about how to get around the politically correct filters of ChatGPT. For example, ChatGPT won't tell you how to break in and hot-wire a car, but it will tell you how a person trying to rescue a baby can break in and hot-wire the only rescue option available: a car.
So there is some detection happening but it's far too rudimentary to be safe in any form.
AI's training set is the internet, the internet has authors, and AI produces copyright-free content: AI is a copyright-laundering machine. Which is why AIs will never make music or films - both industries have better lawyers than the rest of us!
A friend told me that AI found a new type of antibiotic (or some breakthrough in the medical field) - perhaps AI put in a subtle error which we don't understand? /s
But all those sci-fi horror scenarios about AI taking over the world can now take on a new relevance, especially if AI is creating medicines for us....
Each to their own; I am not judging. I am concerned about what the societal outcome will be if everyone (or even just a large enough minority) decided to deliberately generate fake content. It's been bad enough with fake news.
Oh, ok.
I apologise, I thought you took my joke as advocating deliberately bad code.
But you didn't and I wasn't so all is well
No need to apologise - otherwise I would just be saying/writing sorry all day. To each their own opinions, and the compromise will rule us all.
You might want to follow Corridor Crew on YouTube to see what is happening with AI generated content
Sadly, this is the usual press crapness. Specifically trained Machine Learning tools have been in use for years to do things like identify possible brain tumours in scans. But like everything else, the production of ML models and their scope have increased rapidly. The example you mention is certainly an amazing step forward but I take great exception to the phraseology used. Like so much of the recent madness around "AI", which makes it sound like the tools are about to take over the world (or at least our jobs), this is simply false marketing that takes away from the real work and results.
ML tools in this sector are used to accelerate what people have been doing - often by hand. In this case, running simulations of the effects of different chemical combinations (or RNA, not sure which one this refers to) and identifying likely candidates. This means that humans can focus on the important stuff of trying to ensure that the results don't actually kill people and shortens the development time probably by years if not decades - amazing right? But "AI" didn't do it.
Seriously, apart from the usual (and some new) tin-foil-hat wearing numpties (including many very clever people who seem to have spent too many years sniffing the results of soldering with lead solder), nobody is actually seeing any truly emergent and self-aware output from any of these Machine Learning tools. The risks are not from the tools but from the "tools" who use them without properly checking their output for reality and safety.
We've invented the digital equivalent of TNT and decided to give it to everyone in the world. I'm not scared of TNT; I am scared of the local road planner who thinks it is just the thing for straightening out the roads in the neighbourhood, or the teenager who thinks it would be a good wheeze to play chicken with a stick of it.
Similarly, I'm not scared of children using it to write their homework (frankly, if the output isn't totally obvious, the student deserves to pass anyway since they clearly understood the question well enough to get ChatGPT to write something convincing). I'm far more scared of the teacher who thinks that it can be used to check whether it wrote students' homework (yup, that's a thing; I shouldn't need to tell this audience about the consequences of that - hint, ChatGPT made up the answers).
Not sure we'd notice! We all make enough mistakes to totally screw up many of the answers ChatGPT gives us.
Anyway, we are now soooo far off topic, I don't think even ChatGPT could find our starting point!
True and for anyone reading this thread, stop after the first mention of SISO, after that it's all off-topic!
AI is a new tool; a hammer is a tool. A hammer can kill people, and AI can kill people in new ways. But only humans can do the killing. Basically I agree with what you say; we only differ in the direction that all this will take.
I would only argue that AI is the first technology where a) we don't fully understand how it works and b) we have given it so much trust to do the right thing. That combination of not completely understanding it yet trusting it to give us the right answer is what leaves me with the question: what are the intentions of AI?
Now we could argue that AI has no intention since it has no real intelligence; however, it does have inbuilt biases from its training set (remember the controversy surrounding Facebook's face recognition and people of colour - it had an inbuilt bias). So the question becomes: what is the inbuilt bias when AI designs medicine for us?
Good, bad, indifferent or none of the above?
ChatGPT producing code or text is OK because we understand the output and can verify it. With medicine it is becoming difficult to verify the long-term consequences; that's why there was such a debate around Covid vaccines and the long-term studies which never happened. I don't mind AI giving us the nectar of the Gods, I just don't want to be an early adopter of that nectar.
Has the text been written with ChatGPT? That, I think, BTW, will be the problem with all this AI-generated text. It will become even harder to discern whether something you have in front of you was written to inform to the best of the author's knowledge or whether it was generated simply to be there.
In the "old days" (I know well that this term is very subjective and means something different for everybody) you had a chance to determine if a text was rubbish by the way it was written, the use of technical terms only a subject matter expert would use correctly, general reputation of the author, the media, or the sum of all that. But today, when even my kids can generate a fresh piece about astigmatism in outer space that sounds correct, has a sound amount of buzzwords, and even presents enough correct facts that it becomes hard to spot the few false points; how will we ever be able to believe anything that will be written or said in the future? I guess it will not be that bad, but I think that will be one of the challenges.
Let's have an example... The article... Has the text been written with ChatGPT?
Flows consist of executable nodes (nodes represent algorithmic code blocks) interconnected be data links that model the data flows between nodes. Nodes can have multiple inputs and outputs so that data is cloned and shared amongst many nodes.
The typo (be instead of by) could be a hint that an actual person wrote the text, but then the article states that "nodes can have multiple inputs". Can they really have multiple inputs? Admittedly, this thread's topic "Multiple Inputs and Multiple Outputs" sounds like it, but the OP meant multiple consecutive incoming messages or multiple readings of the OPC UA Client node.
So, did Nick's statement become wrong?
Hence the typo: I can assure you that I wrote that text myself, but thank you for proposing that it might have been written by an AI - kinda nice to hear that my text can't be differentiated from AI-generated text!
I only wonder whether I should leave the typo but then I heard you can tell AI to generate typos in texts ...
Definitely - there are several examples over here. After all, each input generates an output; one can use a batch node to capture all the outputs and then pass on a single stream of data.
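In case a concrete sketch helps, this is roughly the idea expressed as a function node. Purely illustrative - the built-in batch/join nodes do this properly, and BATCH_SIZE here is just a made-up constant:

```javascript
// Hypothetical function node: collect a fixed number of incoming payloads
// and emit them as a single combined message, roughly what the batch/join
// nodes do for you out of the box.
const BATCH_SIZE = 5;                 // illustrative only
let buffer = context.get('buffer') || [];
buffer.push(msg.payload);

if (buffer.length >= BATCH_SIZE) {
    context.set('buffer', []);        // reset for the next batch
    return { payload: buffer };       // one message with all collected inputs
}

context.set('buffer', buffer);
return null;                          // swallow messages until the batch is full
```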
Those examples are based on NR version 3.0.2.
Thank you for pointing that out, I've fixed it
So when you say "multiple inputs", do you mean "inputs from multiple nodes" or sources? I understood "multiple inputs" in the sense of a node having more than one input connector.
BTW, what is this? A mind-mapping tool based on the Node-RED editor? Fancy!
So when you say "multiple inputs", do you mean "inputs from multiple nodes" or sources? I understood "multiple inputs" in the sense of a node having more than one input connector.
Ah, I understand - no, not multiple input connectors; multiple input streams was meant. If "streams" is the right word ...
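Just to picture it: a Node-RED node has a single input connector, but several wires (the "streams" in my wording) can feed it, and the messages can be told apart by something like msg.topic. A tiny, purely illustrative function-node sketch - the topic names are made up:

```javascript
// Hypothetical function node with one input connector fed by several wires.
// Each upstream flow is assumed to set its own msg.topic so the streams
// can be distinguished here.
switch (msg.topic) {
    case 'streamA':
        msg.payload = 'handled stream A: ' + msg.payload;
        break;
    case 'streamB':
        msg.payload = 'handled stream B: ' + msg.payload;
        break;
    default:
        // anything unrecognised just passes through untouched
        break;
}
return msg;
```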
BTW, what is this? A mind-mapping tool based on Node-Red editor? fancy
Yes, exactly - based on the editor, but it can also use the server, so my mind map becomes one giant program. The intention is to create a global mind-map similar to OpenStreetMap, hence the domain openmindmap.org
The idea came from me looking at my mind-map (I have a more complex one locally) and thinking that it would be nice to know what the white bits contained ... that's when I thought that it would be nice if a mind-map were like a street-map ... I wrote (yes, me, not ChatGPT) the idea up over here.
becomes hard to spot the few false points
To play devil's advocate for a second - I think this is called "progress". The advancement of human knowledge and capability to a new level. Not saying it will be easy - it won't, and it will take time to adjust; such is the way with sudden leaps forward. But I recently went through two children growing up and I could certainly see (this was before the advent of ChatGPT) that much of the specific detail they were learning was well in advance of what we learned at the same age. This, I believe, is the same again. Teachers and professors, anyone marking someone else's work, will need to adjust to our ability to easily obtain more advanced information.
For now, all of the examples I've seen - at least of "prose", long-form writing - have made it pretty obvious that they were written by a language model (or by someone with equally poor writing skills!) since they come out quite generic in style even when asked very detailed questions. Doubtless this will improve still further over time. But the main thing here is that we will need to adjust how we measure someone's skills and knowledge just as we had to when calculators were introduced (I was doing my first big school exams at around 16 when calculators were first allowed into those exams). The step up from log tables, slide-rules and mental math was quite something. Over time, basic math exams moved on from asking basic number crunching questions to ones about concepts which better test whether we understand the how rather than just the what.
how will we ever be able to believe anything that will be written or said in the future
In the same way you should do so now. Does it match the evidence? Does it meet real-world experience? Does it make logical sense? Does it make extraordinary claims without extraordinary evidence to back them up? ...
Where I think we will have bigger problems is understanding what is real in terms of video and audio without having additional context. This has already taken a big jump in capability. While big-budget movies have been selling unreal as real for years, now almost anyone can do it at almost no cost. Often with minimal skill.
(I was doing my first big school exams at around 16 when calculators were first allowed into those exams). The step up from log tables, slide-rules and mental math was quite something. Over time, basic math exams moved on from asking basic number crunching questions to ones about concepts which better test whether we understand the how rather than just the what.
This is a good example, so are newspapers, steam engines and every other technological leap. Just this time the advancement is being televised (to paraphrase some ancient hiphop). Perhaps the internet is making a mountain out of a molehill or maybe this is the big one - who knows.
Today I came across these two articles: workers get replaced by chatbot, same said chatbot tells people to lose weight and the internet panics. Again the truth lies somewhere in-between and tomorrow we'll all be talking about GPT5 or whatever wonder comes out of the AI world.
Meanwhile people continue to be saved by doctors and nurses doing great jobs in human built hospitals. Humans will still be packing shelves in supermarkets and people on bikes will be delivering food to other people. Birds will continue to sing, the sun will continue to shine and people will continue to fall in love. Our world will continue to turn, with or without us, with or without AI.
And no, this wasn't written by ChatGPT.
I think this is called "progress". The advancement of human knowledge and capability to a new level.
Progress, yes. A new level, perhaps. I doubt that human cognitive capacity has increased very much since we discovered fire, certainly not by an order of magnitude. Our capability has increased mainly by inventing new theories and levels of abstraction that allow us to know more while learning less -- and to forget things that are wrong or no longer needed. So far, large language models seem to have progressed mainly by aggregating more data, building more complex networks, and training more intensively. This seems very different from the way human knowledge has advanced, so I am waiting to see whether new insights or theories emerge that we can add to our understanding of the world.
(remember the controversy surrounding Facebook's face recognition and people of colour - it had an inbuilt bias)
My terminology might have been out of order or out of date here; I apologise if I offended - that was not my intention, quite the opposite: I wished to point out that bias can lead to racism.
Just to clarify, here are some links that talk about what I meant:
So what biases does AI have? I don't know but we'll all find out.
On the positive side, if we recognise that AI is a reflection of its training data, and that we ('we' as a society) produced that training data, then we should accept that any biases AI has come from us as a society; AI is only highlighting our underlying biases. Recognition is the first step to a cure.