If I had a quid for every email that comes into my inbox about the use of AI in law, I’d be retiring. Horror stories about lawyers misusing AI, lying about using AI, dire warnings about hallucinations, guidance for judges on using AI, warnings from data protection gurus about the perils of using AI for your case, relentless offers of software and better legal research tools powered by AI, speeches by judges about the future of AI*, news articles by evangelical professors about how AI will replace us all before too long, dedicated blogs from lawyers attempting to track all the horror stories… and surprisingly few about the use of AI by litigants in person.
So, what about litigants in person?
In the family court there are vast numbers of litigants in person (LiPs), mostly in dispute about finances or children after a separation. It stands to reason that LiPs are using AI in preparation for their Family Court hearings. Why wouldn’t they? Everyone else is doing it and it’s right there at the top of your Google search.
Anecdotally, we lawyers are starting to see documents submitted by litigants in person which have a whiff of AI to them. Documents which cite caselaw and rules we haven’t heard of before, or which just don’t match the bewildered rabbit-in-the-headlights we’ve met at court… Documents which raise points out of the blue that have not been mentioned previously… and I’ve certainly received emails from litigants which all have the same ring to them – they all sound like the sort of overbaked spiel you see trying to persuade you to ‘sign this petition’ or ‘donate to this important crowdfunder’, more than they sound like a real human being speaking from their own experience. I reckon that a fair proportion of those have been ‘improved’ by AI before being sent. They become as easy to spot as a bogus reel where everyone has an extra finger (or, in one I saw this morning, a whole extra arm).
I can well see that if you are faced with the daunting prospect of a family court case without a lawyer, particularly if you are dealing with the other party’s lawyer spouting legalese at you, it would make sense to try and level the playing field by using AI – give as good as you get, right? After all, it’s free, so why not? Can’t hurt, can it? Steady on there, not so fast. Read the small print before you do it, please…
I’m not going to tell you not to use AI at all. But I am going to tell you to be very careful about how you use AI in preparing for family court, because I think it might give you false confidence, could end up getting you into trouble, and could even damage your chances of getting a good outcome. Let me explain why I say that.
First up: confidentiality
One of the things featured in guidance for lawyers about appropriate use of AI is a stern reminder that when you feed your case details into AI you are effectively publishing them. Whatever you put in, AI will gobble it up and store it and draw on it in future as part of its information bank to help it answer future questions. Who knows where snippets of your case may re-emerge in future? That’s a problem for lawyers who have a duty of confidentiality, and it’s a problem for you in family court because family court materials are confidential, and there is a prohibition on sharing or ‘publication’ of private information (and ‘publication’ for the purposes of the rules around the Family Court is very widely defined, and essentially includes any sharing that is not specifically permitted). By using ChatGPT or other open AI platforms you could be breaching those rules, and could be in contempt of court (which can result in a prison sentence or fine – yikes!). Don’t confuse this with lawyers and judges using paid-for, closed-system AI – where information and queries fed into the system are kept within that closed system and do not go into wider circulation – lawyers and judges should no more be using open AI platforms than you.
To be clear – asking AI general questions like ‘what is the law about contact with a child in England and Wales?’ or ‘how can I persuade the judge to allow me overnight contact?’ or ‘how seriously does the family court treat domestic abuse?’ shouldn’t cause any difficulty (though how much help the answers will be is another matter – and the answers will very much depend on the prompt. ‘What is the law about contact with a child in the UK?’ might – and should – get different results from the same question asked about England and Wales rather than the UK).
The problem with confidentiality though, arises when you start asking questions with reference to the specifics of your case and family or the contents of court documents. Once they go in the box on your screen and you hit return you lose control of that information forever.
Dealing with criticism (and the possible threat of punishment) for inappropriately sharing private information is going to be an unhelpful distraction from the real issues in your family court case. So, rule one is: don’t feed details of your specific case into AI. If you use it, ask it general questions only – but remember the answer is only as good as the prompt.
Second: remember AI isn’t infallible
Most people have heard of AI hallucinations. They aren’t always easy to spot – particularly when you are asking for answers about legal things, which might result in answers using terminology you are not familiar with in the first place – what looks plausible to a non-lawyer might be immediately and obviously bogus to a trained eye. That is tricky to safeguard yourself against.
This is a particular problem with caselaw citations (by which I mean a reference to a case which contains guidance). AI is now becoming well known for producing fake cases, fake quotes from real cases, and for using caselaw to support a particular point when really the case doesn’t provide valid support at all. This seems to be catching out a surprising number of lawyers (who should know better, and who are getting into very hot water over it), so have a think – are you going to be any better placed to spot a hallucination than they are?
So, if you use AI and want to use the caselaw it suggests, you are going to need to know how to check it is real and accurate, which is easier said than done for a non-lawyer, partly because you are going to have to rely on publicly available free sources (which give you access to a lot, but not everything).
You will need to learn how to use a case citation to find and check a case (the unique reference for each case, usually in a format a bit like this: PARTY v PARTY (SUBJECT) [2025] EWFC xxxx – the title of this post isn’t a real citation, by the way). And you need to know where to look and how to double-check your case and the particular quote – many cases can be found on www.bailii.org, but the search engine is glitchy, and if the citation given is to a particular law report rather than what’s called the ‘neutral citation’ it may not come up. Once you’ve found the case you need to check the quote is actually there.
And then – this is the harder part – you need to try and work out if the case is actually suitable to be relied on for the point the AI suggests it supports.
Check that the quote relied upon isn’t taken out of context. Read the whole case. If the AI has picked a single line from a dissenting judgment (a judge who disagrees with the majority in an appeal) when the appeal judgment goes the other way, that probably isn’t going to be a case that supports your position.
But to properly check if the case you are thinking of relying on is ‘authority for the proposition’, you also need to understand how the law of precedent works. Knowing which judgments are authoritative, and what for, is a skill that lawyers learn in law school. It is not something I can easily summarise in a short blog post in a way that is easy for a novice to apply. And it is also not a skill that AI possesses (hence why it might quote from a dissenting judgment, or point you to an authority that has been overturned on appeal, without warning you that the case is no longer ‘good law’).
So, never assume that the caselaw AI is suggesting you use, refer to or quote is real, reliable or relevant. It might not be.
And… even if you check your references and it’s a real quote from a real case, that case might not be a sound authority for the point you need to make. How will you deal with that if it comes up at court?
Third: it’s your responsibility to check your sources
The reason why you need to check should be clear from what I’ve already said. But actually the reason is twofold – it’s partly because you might end up in court with a false sense of confidence, thinking that your document contains the ‘silver bullet’ that will win your case, when in fact it is no use to you at all – and you’ve wasted your energy on that rather than focusing on the facts and the arguments you could make based on your knowledge of your child and a bit of common sense. Sometimes – often in children cases – those arguments are as powerful, if not more powerful, than all the caselaw in the world. Do not let AI distract you from knowing your case. If you want to apply some law, look at s1 of the Children Act 1989 (freely available on legislation.gov.uk) and apply that to the facts of your case. The same goes for s25 Matrimonial Causes Act 1973 if your case is about finances on divorce.
But it’s also partly because when you submit a document to the family court you will usually be expected to sign it with a statement of truth. If you haven’t checked something produced by AI, how can you say it’s true? Relying on fake or wrong citations takes up court time and has the potential to lead the judge into a wrong decision – judges are taking an increasingly hard line on the use of AI where it is unchecked by the person relying on it. Mostly the examples of this are by lawyers, who should know better, but it is going to be no excuse to say ‘I didn’t realise’ or ‘ChatGPT told me’ just because you don’t have a lawyer. You ran the AI, you signed the statement of truth. You should have checked.
In cases where a court has been misled or nearly misled, and it’s taken up valuable court time trying to work out what has gone wrong (for example where the judge or other party’s lawyers smell a rat and have to waste time trying to find cases which don’t exist), courts have ordered the lawyer to pay the costs wasted on sorting it out. There is obvious potential for that to happen to a litigant in person. The more cases which come up, and the more news coverage there is of the risks of using AI, the less likely it is that the court will cut you some slack. Caselaw is clear that you don’t get a free pass just because you are a litigant in person.
It’s also tempting to feed your draft witness statement into the AI machine to see if it can improve what you’ve said. But a witness statement is meant to be your evidence – it’s not a speech or an argument, it’s meant to be your account of what you saw, heard, said and did – and it is supposed to be in your own words. Do not be tempted to let AI ‘improve’ your witness statement. Quite apart from the fact that you are sharing your private information with the world, you will end up with a statement that doesn’t sound like you, and which you will struggle to persuade the judge is your authentic account. Do not be tempted to let AI massage the facts so that they are inaccurate, or to use words or phrases you never would (especially ones you don’t understand). Far better for your statement to be written how you speak, and to say it how you see it. I think this is more important in the Family Court than almost anywhere else – the decisions Family Court judges have to make are about unique families, unique children and unique situations. The judge needs to hear from the real human, not your campaign manager. Your unique voice, warts and all, is your secret weapon.
Fourth: be up front about using AI
It isn’t an explicit feature of the Family Court rules, but use of AI is very much on the radar of judges, lawyers and rule makers. I think it’s wise to say if you have used AI right from the off. You might think your document sounds very impressive, but the more it sounds like a lawyer the less it sounds like you – and the harder it will be for you to speak to it, i.e. to talk to the judge about it and to answer questions about it. Lawyers and judges will smell a rat if they think you haven’t written the document yourself, and are likely to ask you questions about the checks you have made to verify the accuracy of the arguments and caselaw, and about whether your statement is really in your own words.
Knowing you have used AI will help the judge to understand why, when you stand up and try to address the judge, you maybe don’t sound at all like you do on the page. If you struggle to say anything under pressure and clam up or mangle your sentences (we all do it sometimes) the judge will understand the mismatch rather than possibly thinking you are putting on a show.
It will also avoid you being put on the back foot for not disclosing use of AI voluntarily.
If you haven’t declared the fact you have used AI and explained how you have used it, and the court later works out that you have, the court may be sceptical about the reliability of your evidence or submissions. In a case where your honesty and credibility are important, this is not a good look.
Far better to say what you have used AI to do, to briefly explain how you have used it, to set out what checks you have carried out to ensure the information is correct, and to confirm that where you are giving evidence about facts you are using your own words. And to reassure the court you have not fed any court documents or confidential information into the AI – before someone else raises it, when you are on the back foot.
Fifth: you need to understand it to use it
Lawyers will tell you that every so often a document will do its job without you ever having to open your mouth (for example the other side will concede a point or change their position without arguing the point, because they realise you are right). But that is vanishingly rare, and most of the time the process only begins with the written documents. These are your springboard, and you need to be able to use the document in court – to explain it, amplify it, and to align your actions in court with it.
Of course many arguments sound better on paper, where you are able to put your words together in your own time, and before they have been picked apart. But the real art of persuasion is in the combined effect of your written documents and what you say about them to the judge – so that the judge can see the words brought off the page. Reading the words off the page aloud in court like a script is not as persuasive as explaining why you have written what you have.
A skeleton argument is a document that is meant to outline your arguments – the arguments you will make in court – using your words, and adjusted in light of the evidence the court has heard. If the words on the page are brilliant spaghetti to you, how will you answer questions about them? How will you work out when you need to adjust your position because the evidence doesn’t match up with it?
OK, you’ve freaked me right out. So what can I use instead of AI?
I mean, you could always use a lawyer? I’m being slightly serious here. Most people who use AI are probably doing so because they can’t afford a lawyer, and I am afraid I can’t solve that problem for you – though I’ve suggested some resources below.
But some people may well think that if they have AI they don’t need a lawyer – so why waste money on one? I mean, maybe so. But on no account confuse AI with the skills and experience of a lawyer, particularly a lawyer with a specialism in a particular field. AI has never been in a courtroom, has never had to pivot in the moment in the middle of a trial because something unexpected happened, and definitely doesn’t know that judge in Blankshire County Court with the unusual approach. You cannot make yourself into a pocket lawyer with a spot of AI.
‘OK, but’ (I hear you ask) ‘can I use AI and have a lawyer?’ Sure you can. But if you keep bringing the results of your AI searches to your lawyer to confirm, unscramble or to explain why the AI is talking baloney, you are going to burn money fast. Every time you present the latest theory or citation that AI has come up with, your lawyer will need to check it out and, if it turns out to be wrong, will need to explain why it is wrong. You are paying for that. More efficient to just pay for their tailored advice from the off, and trust them to discard the irrelevant and present you with what really matters.
Second guessing your lawyer with AI will not make for a productive or efficient working relationship. If your lawyer hasn’t mentioned something, it might be because they forgot or didn’t know about the Get What I Want Quickly Act 2025, but it also might be because they know their stuff and it was not helpful or relevant to discuss. It’s your money, and lawyers are not infallible, but generally lawyers work best when you have a little faith in them. If you aren’t sure you understand their advice, just ask them to explain it or simplify it.
There are loads of reputable websites which can give you general information about the legal framework that applies to your case, and I would recommend you check those instead of AI – or at least alongside it. They will give you an accurate, intelligible explanation of the law, and of court procedure that you can use to work out your position and arguments, arguments that you can express in your own words and voice, and which will be far more persuasive than parroting lawyery sounding words you don’t understand.
Try looking at some of the resources on www.familycourtinfo.org.uk for instance, or the AdviceNow guides. You can also visit www.transparencyproject.org.uk or look at the Family Justice Council Guidance page. The court rules can be found here, the Children Act 1989 here, and the Matrimonial Causes Act 1973 here. You could also buy a copy of my book The Family Court without a Lawyer – A Handbook for Litigants in Person from the lovely folks at Bath Publishing https://bathpublishing.com/ .
You could also visit Rights of Women https://www.rightsofwomen.org.uk/get-advice/ , Family Rights Group https://frg.org.uk/ or see if Advocate www.weareadvocate.org.uk can help you with advice and / or representation.
If you are worried about silly mistakes in your written documents, you can use the tools in your word processing software (e.g. Microsoft Word) to make sure you don’t have spelling mistakes or punctuation errors, or to spot any syntax errors (where the words in a sentence are in the wrong order). But really, as long as what you say makes sense and is presented in a logical order, no judge is going to penalise or criticise you for a few typos or for expressing yourself clumsily. You don’t have to mimic a lawyer to win your case, and often you are more persuasive when you are just yourself.
*I’ve linked to the 2025 Mayflower lecture, which I know happened in November because I attended it. However, Google AI insists it has not happened and that no Mayflower lecture is scheduled for 2025, which just goes to prove my point. Unfortunately the speech has not been published yet, but it should be up at this location or on the judiciary website soon – I recommend reading it.