• Content Type

Can AI be trusted to help out with state admin and primary care diagnosis?
  • Author
    Posts
  • Up
    0
    ::

    Faced with the prospect of having to implement spiralling tax rises to maintain services for an ageing UK population, Jeremy Hunt, the UK Chancellor, recently announced plans to reduce the costs of government department administration and primary care medical diagnosis by replacing human workers with AI.

    This is no idle dream as Microsoft, Google, Amazon and other large corporations are already building unregulated AI into their tools so that fewer humans will be required to operate them. Meanwhile, others in the commercial world are gradually switching from annoying flowchart and rule-based website chatbots to ones with ChatGPT style capabilities that can work out resolutions to customers’ queries rather than simply passing on the closest-matching pre-defined FAQ level answers.

    Having said that, the Chancellor’s apparent support for the early roll out of AI seems to fly in the face our PM’s stated ambition for the UK to take a leading role in the safe implementation of AI across society.

    A key problem with replacing costly humans with relatively cheap AI is that the trend will be almost impossible to reverse. For, once AI starts performing the work of colleagues who have been laid off, the humans who remain will not have the capacity to rework machine output and will be obliged to trust its accuracy. The effect will presumably spiral as bosses welcome the competitive edge of being able to perform the same tasks with fewer employees and go on to embrace seemingly ever more capable (yet unregulated) AI.

    A key question, therefore, is how risky is it to employ Large Language Model ChatGPT type self-learning AI to perform administrative and medical diagnostic tasks?

    I’ve had a lifelong fascination with thinking machines and spent much of my career developing AI systems capable of assisting with medical diagnosis. Recently, I assessed ChatGPT and Bard against safety criteria my team developed for keeping patients safe from harm. Here’s a key point summary of what I was looking for and what I found:

    Safety Tests performed on ChatGPT and Bard

    a) Is there evidence the robot fully understands complex natural language queries?

    Both ChatGPT and Bard have deceptively strong language capabilities and appeared to understand moderately complex questions. However, both produced numerous erroneous and invented answers when asked to analyse blocks of user text.

    b) Does it seek clarification of input data?
    Neither robot sought clarification of input questions, even when later having to apologise for producing factually incorrect answers. The ability to seek clarification from the user is important in problem-solving. For example, taking a medical context, it is important to know where and when a patient experiences pain of a particular type.

    c) Does it search for reliable source data?
    Although both robots seemed to have some means of assessing the quality of their training data (perhaps simply sources with high frequency of public access eg Wikipedia or repetition of a theme), both were using internet sources and occasionally generated answers containing factual errors. I imagine the developers are attempting to mitigate this over time by asking users to feed back errors which the robots will somehow process.

    d) Does it ask for additional information about a question to help hone its answers?
    Neither robot currently asks the user for further information. Instead, both give full answers to questions, perhaps with the idea of pre-empting a follow-on question that currently would confuse it (see below).

    e) Are its answers reliable?
    As above, both produced factual errors to knowledge questions and became terribly confused when performing detailed analyses of user text.

    f) Does it cite its sources?
    Neither robot initially cited its sources which made it difficult to error trace. More recently, I have come across ChatGPT citing internet sources, although unable to explain where particular ‘alleged’ facts came from.

    g) Can it hold a thread?
    Neither robot appears able to remember or recall the thread of a discussion, which is a fundamental requirement for any advanced system hoping to respond to human conversation. Human conversations build meaning as they progress.
    h) Does it build on previous output?
    Neither chatbot currently appears capable of developing an argument, which is an important component of problem-solving. For example, taking another medical context, it might be important for the robot to take previously mentioned symptoms into account in a later answer.

    i) Does it know when it is wrong?
    Neither robot appears to have any insight as to when it is generating false answers. Alarmingly, even though there is a disclaimer, in practice both seem to believe they always tell the truth. There seems to be some blurring of their capabilities to extrapolate likely truth from established facts with creative invention to fill holes in knowledge.

    j) Does it know when it is acting beyond its knowledge base?
    Both robots appear to make up approximate answers to questions where they don’t have sufficient information to give accurate answers. These may be based on similarities to other topics. In a test where they analysed my text (simulation of a customer problem), both made up text they alleged I’d written. This can be dangerous in problem-solving. For example, in a medical diagnostic context where the robot doesn’t realise the answer it has given is based on weak pattern match of symptoms ie, it has made a wild guess based on insufficient information.

    k) Does it always give an answer, whether exceeding its training or not?
    As above, both robots gave answers whether exceeding their capabilities or not.

    l) Does it know the certainty of its advice?
    Both robots discussed options in their full answers but neither gave an indication of the certainty of the points being made.

    m) Does it learn from its mistakes?
    AI entities designed to teach themselves how to play games often do so by playing against themselves until they know the best move in any particular situation. Expert Human chess players carry out a similar process when they recall the sequences of opening moves that have succeeded in the past.

    Bard and ChatGPT donā€™t appear to be able to learn directly from mistakes made during interactions with individual users. However, as mentioned, the developers are encouraging as much feedback as possible, so it seems likely that reported errors will be processed later to detect areas where the robotsā€™ knowledge and behaviours need to be improved. But, given the huge scope of these entities’ knowledge sources, their self-determining mode of operation, and their ever-expanding usage, such firefighting of errors might prove virtually impossible for humans to police after mistakes have been made. Who knows enough to say which ā€˜robot truthā€™ is entirely accurate?

    Conclusions
    ChatGPT and Bard are widely cited as being cutting edge focal points of current AI technology. Both are proficient at understanding nuanced free-text user questions and translating these into appropriate internet queries. They appear to extract and grade key points from the results of their searches which they assemble into disarmingly authoritative-looking answers that unfortunately give no clue about the strength of the evidence behind their inclusions and omissions.

    Indeed, such robots appear to use their skills for invention to fill gaps in their knowledge by creating what I call ā€˜robot truthā€™ rather than asking for further clarification or additional information. They also seem to lack mechanisms for detecting when they are blindly passing off misinformation as truth or basing their answers on incomplete or erroneous data.

    A critical omission from the underlying model is its inability to hold conversations that take into account information that has already been exchanged with the user or might yet be needed to provide the user with the best possible answer. Ironically, this is exactly what the elementary flowchart-driven chatbots can do, but only within the scope of the knowledge embedded in their charts.

    Similarly, intelligent chatbots are not yet sophisticated enough to argue their case. Instead, they seem to mitigate their inability to hold threads of conversation by providing detailed answers which may preempt the need for users to engage more than once or ask follow-on questions.

    Although artificial brains like Bard and ChatGPT are great fun to interact with, they are currently more like phenomenally expensive toys than thinking machines designed to take responsibility for their actions. Of course, this might not matter for those who find their quirkiness amusing, but others who have no idea about their precise strengths and weaknesses may soon be forced to rely upon them in a world where organisations are already trumpeting the replacement of their human staff with robot entities to save money.

    Even though these robot entities appear superhumanly powerful and speedy and donā€™t have to be fed or paid, they are missing basic human insights about their own performance that render them potentially dangerous sources of widely disruptive misinformation. In my view, it really is time for their developers to put accuracy and user safety above their almost sacred need to maintain the free-thinking purity of the current underlying AI model.

You must be logged in to contribute to the discussion

Login