Newsgroups: comp.lang.ada
Date: Thu, 6 Apr 2023 18:51:26 -0700 (PDT)
From: Ken Burtch
Subject: Re: ChatGPT
Message-ID: <9d97fbf3-ef0a-4c39-8d28-af6d20245af1n@googlegroups.com>
References: <3db3c046-bbcf-497b-afd5-ac6c2b9567afn@googlegroups.com>

On Saturday, April 1, 2023 at 3:39:51 AM UTC-4, Dmitry A. Kazakov wrote:
> On 2023-03-31 23:44, Anatoly Chernyshev wrote:
> > Data science people swear it's just a matter of the size of the training set used...
> They lie. In machine learning, overtraining is as much a problem as
> undertraining.
> The simplest example from mathematics is polynomial
> interpolation becoming unstable with higher orders.
>
> And this does not even touch contradictory samples requiring retraining,
> or time-constrained samples, etc.
>
> > I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely off-limits as of now.
>
> One must remember that ChatGPT is nothing but ELIZA on steroids.
>
> https://en.wikipedia.org/wiki/ELIZA
> --
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

For what it's worth on the subject of the chatbot, "Produce Ada code for solving a quadratic equation" is a terrible choice for a test of ChatGPT, as one is merely asking whether it can do a Google search. To test its abilities, you have to pick a challenge that cannot be solved with a Google search.

My short assessment of ChatGPT, along with the history of chatbots, is available in my February blog post. I gave it a simple programming problem and it failed three times out of four. That's not surprising: I've learned since February that the chatbot doesn't actually understand programming. It uses examples off the Internet and tries to predict what you might have typed based on keyword patterns. It is an imitation of an imitation, smoke and mirrors. This is why Vint Cerf denounced it.

You can read my thoughts on my blog:

https://www.pegasoft.ca/coder/coder_february_2023.html

Ken Burtch
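P.S. For anyone who wants to see the interpolation instability Dmitry mentions, here is a quick sketch (mine, not from his post; plain Python rather than Ada, just to keep it self-contained). It interpolates Runge's classic example f(x) = 1/(1 + 25x^2) with a Lagrange polynomial on equispaced nodes and reports the worst-case error on [-1, 1], which grows as the degree goes up instead of shrinking:

```python
# Runge's phenomenon: higher-degree polynomial interpolation on
# equispaced nodes gets WORSE, not better, for f(x) = 1/(1 + 25x^2).

def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolating polynomial through
    (nodes[i], values[i]) at the point x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def max_error(degree, samples=201):
    """Max |p(x) - f(x)| over [-1, 1] for the degree-n interpolant
    of f on equispaced nodes."""
    nodes = [-1.0 + 2.0 * i / degree for i in range(degree + 1)]
    values = [f(x) for x in nodes]
    pts = [-1.0 + 2.0 * k / (samples - 1) for k in range(samples)]
    return max(abs(lagrange_eval(nodes, values, x) - f(x)) for x in pts)

if __name__ == "__main__":
    for d in (5, 10, 15, 20):
        print(f"degree {d:2d}: max error = {max_error(d):.3g}")
```

Running it, the error climbs by orders of magnitude between degree 5 and degree 20. More data points (a bigger "training set", if you like) do not automatically give a better fit.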