
In July 2025, it came to light that students had used ChatGPT during the medical entrance exam in Flanders, despite security measures such as disabled Internet access. The scandal drew acute attention to exam integrity and raises pressing questions about the robustness of technical and operational safeguards, especially in temporarily set-up exam environments where IT resources are limited. In this blog, IT and cybersecurity experts highlight the risks of AI-assisted fraud, evaluate existing deficiencies and explain how organizations can ensure integrity with smart measures, even in small-scale exams. You'll learn which technologies and processes are necessary, at a minimum, to effectively keep ChatGPT and similar systems out of entrance exams for medical training.
1. Context: the incident and the cybersecurity challenge
In July 2025, three candidates were caught using ChatGPT during the entrance exam for medicine and dentistry, despite an exam setup that should have prevented Internet access. This shows that vulnerabilities remain even behind supposedly layered security. In addition, the exam board is currently investigating objections from other students about possible large-scale fraud and a remarkably high pass rate (47%, versus 19% in 2024). Such incidents demonstrate that exam environments need robust cybersecurity design, even when resources are limited.
2. Technical measures against AI-based fraud
2.1 Secure (lockdown) browser & screen monitoring
Use a secure browser that blocks all access to unauthorized websites, tabs and apps.
Periodic screen captures allow suspicious activity to be recorded automatically (see the sketch after this list).
Platforms such as Exam.net prevent copy/paste and detect focus switching.
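To make this concrete, here is a minimal sketch of the kind of periodic screen capture a monitoring agent could perform, assuming the third-party mss library; the interval and output directory are illustrative, not taken from any specific proctoring product.

```python
# Minimal screen-monitoring sketch: capture a screenshot at a fixed interval
# for proctor review or later forensic inspection.
# Assumes the third-party `mss` library (pip install mss).
import os
import time
from datetime import datetime, timezone

import mss

CAPTURE_INTERVAL_S = 30        # illustrative: one capture every 30 seconds
OUTPUT_DIR = "exam_captures"   # illustrative local spool directory

def capture_loop() -> None:
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    with mss.mss() as screen:
        while True:
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            # shot() grabs the primary monitor and writes it out as a PNG
            screen.shot(output=f"{OUTPUT_DIR}/{stamp}.png")
            time.sleep(CAPTURE_INTERVAL_S)

if __name__ == "__main__":
    capture_loop()
```

In a real deployment the captures would be streamed to the proctoring backend rather than spooled locally.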
2.2 AI-assisted proctoring and live monitoring
Tools such as Proctor360 offer cameras with 360° views, multi-camera setups, AI behavioral analysis and live intervention capabilities.
Talview combines secure browser, room check from 360°, behavioral analysis and proctor logs.
This hybrid approach reduces the risk of fraud while still enabling human intervention.
2.3 Question design and multimodality
Avoid reusing questions. Create unique, constantly changing exam questions so that ChatGPT has less opportunity to anticipate them.
Insert images and drawings into questions, including embedded text labels; such multimodal questions have been shown to significantly weaken ChatGPT's performance.
Use question sets with sequential building blocks that test comprehension and frustrate AI-generated answers.
2.4 Question vulnerability scoring
Investigate which question types ChatGPT answers least well. Use NLP analysis to measure this per question type and avoid the vulnerable ones.
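As a thought experiment, the sketch below estimates such a vulnerability score: each question is posed repeatedly to the model under test and graded against the answer key. ask_llm is a deliberate placeholder; wire it to whichever model endpoint you want to test against.

```python
# Sketch of question-vulnerability scoring: pose each exam question to the
# model under test several times and grade against the answer key.
# `ask_llm` is a placeholder, not a real API.

N_TRIALS = 5  # repeated sampling smooths out answer variability

def ask_llm(question: str) -> str:
    """Placeholder: send `question` to the LLM under test, return its answer."""
    raise NotImplementedError

def vulnerability_scores(question_bank: list[dict]) -> dict[str, float]:
    """question_bank items: {"id", "type", "text", "answer_key"}."""
    scores = {}
    for q in question_bank:
        hits = sum(
            ask_llm(q["text"]).strip().lower() == q["answer_key"].lower()
            for _ in range(N_TRIALS)
        )
        scores[q["id"]] = hits / N_TRIALS
    return scores

# Questions that score high (say, above 0.6) are easy prey for the model and
# are candidates for a multimodal or cumulative redesign; averaging scores per
# question type shows which formats are structurally vulnerable.
```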
3. Operational approach for temporary and small-scale sites
Even in temporary and small-scale settings, at least the following measures can be applied:
Preliminary room and device check: well in advance, scan the room and tables for hidden devices. A security officer performs this check.
Mobile proctoring kit: a tablet or smartphone with a proctoring app (secure browser + video stream) is sufficient for small exam cohorts.
Hybrid proctoring: combine live camera monitoring with an on-site attendant to balance cost and security.
Logging and video retention: keep logs of device usage, behavioral data, video recordings and suspicious events (a tamper-evident logging sketch follows this list).
Paper backup (pen-and-paper): if the technology fails, you can quickly switch to a traditional exam format.
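For the logging point, a minimal sketch of a tamper-evident event log: each JSON line embeds the hash of the previous entry, so deletions or edits after the fact become detectable. Field names and the log path are illustrative assumptions.

```python
# Tamper-evident event log sketch: each JSON line carries the SHA-256 hash of
# the previous entry, so after-the-fact deletion or edits are detectable.
# Field names and the log path are illustrative assumptions.
import hashlib
import json
import time

LOG_PATH = "exam_events.jsonl"
_prev_hash = "0" * 64  # genesis value for the hash chain

def log_event(event_type: str, detail: dict) -> None:
    global _prev_hash
    entry = {
        "ts": time.time(),
        "type": event_type,  # e.g. "focus_lost", "usb_inserted"
        "detail": detail,
        "prev": _prev_hash,
    }
    _prev_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    entry["hash"] = _prev_hash
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")

log_event("focus_lost", {"candidate": "C042", "window": "exam_client"})
```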
4. Prevention through processes and awareness
Oral or spoken testing (viva voce): as applied at the University of South Australia, a short oral examination of less than 20 minutes can greatly inhibit fraud.
Behavioral analysis and forensic log analysis: an AI system with a human in the loop helps detect patterns of exam fraud.
Cultural shift toward digital ethics: educate candidates about AI tools and their impact while promoting fair use; the University of Reading study on AI-assisted deception suggests this helps.
5. An AI-cheat-proof exam framework: six layers of defense
5.1 Layer 1: Identity authentication & access management
The first layer of defense must guarantee that the right candidate is taking the exam, and that no one switches places or receives outside help.
Technical measures:
Multi-Factor Authentication (MFA) at login: password plus a mobile OTP code (see the sketch below).
Biometric verification (face match): used before and during the exam to validate identity.
Geofencing via GPS/IP logging: prevents candidates from connecting from outside approved locations.
Randomization schemes for seat or login times to make collusion more difficult.
Relevance to temporary sites:
Easily deployable via cloud-based tools (such as ProctorExam or Talview).
Requires only basic cameras + internet; no heavy infrastructure.
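A minimal sketch of two of these Layer 1 checks, TOTP-based MFA and an IP geofence, assuming the third-party pyotp library; the approved network range is illustrative.

```python
# Sketch of two Layer-1 checks: TOTP-based MFA and an IP geofence against
# approved exam-site networks. Assumes the third-party `pyotp` library
# (pip install pyotp); the network range below is illustrative.
import ipaddress

import pyotp

APPROVED_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]  # exam-site range

def mfa_ok(candidate_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates small clock drift between server and phone
    return pyotp.TOTP(candidate_secret).verify(submitted_code, valid_window=1)

def geofence_ok(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in APPROVED_NETWORKS)

def admit(candidate_secret: str, code: str, client_ip: str) -> bool:
    # both the second factor and the location check must pass before unlock
    return mfa_ok(candidate_secret, code) and geofence_ok(client_ip)
```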
5.2 Layer 2: Device and system hardening
Once identity is confirmed, the device on which the exam is taken must be under full control. AI interfaces such as ChatGPT must be inaccessible.
Technical measures:
Secure lockdown browser: blocks access to Internet, apps, keyboard shortcuts, copy-paste, print screen.
Completely closed OS profile (kiosk mode): only the exam application is accessible.
MDM policy for temporary devices: checks for USB injections, Bluetooth connections, microphone usage.
BIOS/UEFI security: prevents booting via USB or network drive.
Whitelisting of network traffic: only traffic to the exam server is allowed (see the sketch below).
Application to temporary exam rooms:
Use of preconfigured laptops based on “immutable images”.
Network segregation per device via VLAN or private WiFi SSID per row.
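For the whitelisting point, a minimal sketch of deny-by-default egress filtering as a local agent or proxy could enforce it; the exam hostname and port are illustrative assumptions.

```python
# Deny-by-default egress sketch, as a local agent or proxy could enforce it:
# only TLS traffic to the exam server passes. Hostname and port are
# illustrative assumptions, not a real exam platform.
import socket

ALLOWED_HOST = "exam.example.org"  # illustrative exam server
ALLOWED_PORT = 443                 # TLS only

def connection_allowed(dest_ip: str, dest_port: int) -> bool:
    """Return True only for TLS traffic to the resolved exam server."""
    if dest_port != ALLOWED_PORT:
        return False
    allowed_ips = {
        info[4][0] for info in socket.getaddrinfo(ALLOWED_HOST, ALLOWED_PORT)
    }
    return dest_ip in allowed_ips
```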
5.3 Layer 3: Environmental security & infrastructure control
In addition to the device, the candidate’s physical and digital environment must also be rigorously monitored to rule out hidden AI tools or spy devices.
Technical measures:
RFID, Bluetooth, and Wi-Fi scanning for unauthorized signals (smartwatches, mobile hotspots).
Use of Faraday devices (signal blocking): for example, domes over desks or shielded classrooms.
360° video surveillance or dual camera setup (frontal + overhead).
Ambient noise monitoring to detect voice assistants (such as Siri or Google Assistant).
For temporary spaces:
Portable scanners (such as Flipper Zero or Wi-Spy Air) are ideal for an on-site pre-scan (see the Bluetooth sweep sketch below).
Backup power & failover internet for surveillance cameras.
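As an example of such a signal sweep, here is a short Bluetooth LE scan assuming the cross-platform bleak library (any BLE scanner would do); every device discovered near the desks warrants manual follow-up before the exam starts.

```python
# Pre-exam Bluetooth LE sweep sketch, assuming the cross-platform `bleak`
# library (pip install bleak). Any device found near the desks (smartwatch,
# earbuds, hotspot) warrants a manual follow-up before the exam starts.
import asyncio

from bleak import BleakScanner

async def ble_sweep(seconds: float = 10.0) -> None:
    devices = await BleakScanner.discover(timeout=seconds)
    for device in devices:
        print(f"{device.address}  {device.name or '<unnamed>'}")

if __name__ == "__main__":
    asyncio.run(ble_sweep())
```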
5.4 Layer 4: AI resistance of exam questions
Even if candidates had access to AI, smart question design can prevent those tools from generating useful answers.
Technical & didactic strategies:
Multimodal questions with pictures, diagrams or charts, on which AIs perform worse.
Cumulative question sets: where each answer depends on prior understanding.
Context-sensitive assignments (e.g., clinical cases with conflicting information).
Dynamic randomization: no two candidates receive exactly the same questions (sketched below).
Support techniques:
AI vulnerability scoring: test which questions are easily solved by LLMs.
Use tools such as OpenAI Detector or GPTZero to pre-screen questions.
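The dynamic randomization mentioned above can be both unpredictable and auditable: derive a per-candidate seed so that no two candidates see the same set, yet every generated exam can be reproduced for review. A minimal sketch:

```python
# Per-candidate randomization sketch: a seed derived from the exam and
# candidate IDs deterministically selects and orders questions, so no two
# candidates see the same set, yet every exam is reproducible for review.
import hashlib
import random

def candidate_exam(exam_id: str, candidate_id: str,
                   question_pool: list[str], n_questions: int) -> list[str]:
    digest = hashlib.sha256(f"{exam_id}:{candidate_id}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))  # auditable seed
    return rng.sample(question_pool, n_questions)  # unique pick and order
```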
5.5 Layer 5: Monitoring & behavioral analysis during the exam
Live detection of suspicious patterns is essential for real-time action.
Technical resources:
Live video & screenshare monitoring, combined with AI analysis of facial expressions, eye movement, and mouse/key interactions.
Input monitoring: detection of inhuman typing rates or patterns such as burst typing (see the sketch below).
Browser focus analysis: records when the candidate clicks out of the exam window.
Audit logging: central recording of all system and user interactions.
For scalable temporary setups:
Cloud-based proctoring platforms with integrated analytics.
Lightweight clients (Chromebook, tablets with external camera).
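For input monitoring, a sketch of burst-typing detection: flag keystroke streams whose sustained rate or uniformity is implausible for a human. Both thresholds are illustrative assumptions, to be calibrated on real candidate data.

```python
# Burst-typing detection sketch: flag keystroke streams whose sustained rate
# or uniformity is implausible for a human. Thresholds are illustrative.
from statistics import mean, pstdev

MAX_SUSTAINED_CPS = 15.0   # illustrative: characters per second
MIN_INTERVAL_STDEV = 0.01  # near-zero variance suggests scripted input

def suspicious_typing(key_timestamps: list[float]) -> bool:
    if len(key_timestamps) < 20:
        return False  # too little evidence to judge
    intervals = [b - a for a, b in zip(key_timestamps, key_timestamps[1:])]
    rate = 1.0 / max(mean(intervals), 1e-9)
    return rate > MAX_SUSTAINED_CPS or pstdev(intervals) < MIN_INTERVAL_STDEV
```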
5.6 Layer 6: Post-exam forensic evaluation & anomaly detection
After the exam, automated analysis should detect anomalous patterns and escalate potential fraud cases.
Techniques:
Natural Language Forensics: recognizes output that resembles LLM language structure (repetitions, style).
Statistical comparison models: comparison of individual scores with the group mean (see the sketch below).
Plagiarism & duplication control on answer structures.
Behavioral forensics: comparison with historical behavioral pattern of the same candidate.
Temporary locations:
Data automatically sent to central server for analysis.
Low latency is required to synchronize video and input data in a timely manner.
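For the statistical comparison, a minimal sketch: z-scores of individual results against the cohort, with outliers escalated to human review rather than sanctioned automatically. The threshold is an illustrative assumption.

```python
# Post-exam statistical screen sketch: z-scores of individual results against
# the cohort; outliers feed a human review queue, never an automatic sanction.
from statistics import mean, pstdev

Z_THRESHOLD = 3.0  # illustrative cut-off for escalation to human review

def flag_outliers(scores: dict[str, float]) -> list[str]:
    values = list(scores.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # identical scores: nothing stands out statistically
    return [cid for cid, s in scores.items() if (s - mu) / sigma > Z_THRESHOLD]
```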
Conclusion
The discovery of ChatGPT use during the medical entrance exam in Flanders shows that both technical and organizational security layers are crucial, even in small, temporary exam environments. By combining secure browsers, proctoring (AI-assisted and human), multimodal question design, behavioral analysis and clear processes, you significantly reduce the risk of AI-based fraud. Add digital ethics and oral examination for maximum robustness. Want support implementing a secure exam environment? Then schedule a call to discuss how Network IT can secure your exam processes against ChatGPT abuse.