CoreML in Practice: Fraud Detection for Financial Institutions
Machine learning can strengthen security in the financial sector by improving processes such as fraud detection. In this post, we'll build a practical example that uses Core ML to flag possible fraud in financial transactions. We'll implement two features, one based on an image classifier and one on natural language processing (NLP), all demonstrated with Swift in an iOS app.
1. Scenario
- Objective: Create an app that detects fraud based on:
  - Images: document fraud detection using an image classifier.
  - Text: analysis of suspicious messages using NLP.
The app allows:
- Employees to upload a document image for validation.
- A message or description to be analyzed for potential fraud.
2. Setting Up the Environment
Prerequisites
- Xcode: Ensure you're using the latest version.
- Machine Learning Models: We'll use two basic examples:
  - Image Classifier: a pre-trained model that detects whether a document is fake.
  - NLP: a sentiment-style text classifier trained to label messages as "fraudulent" or "legitimate."
You can create your own models or use pre-converted .mlmodel files.
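For example, here is a rough Create ML sketch for training the document classifier. It runs on macOS (not iOS), and the folder names and paths are hypothetical; the only requirement is that each labeled class has its own subfolder of images.

// Run this in a macOS Swift Playground or command-line tool, not on iOS.
import CreateML
import Foundation

// Hypothetical folder layout: TrainingData/Fake/*.jpg and TrainingData/Legitimate/*.jpg
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

// Train an image classifier from the labeled subfolders.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Export the model so it can be dragged into the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/DocumentClassifier.mlmodel"))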
3. Creating the Project
1. Open Xcode and create a new project:
   - Choose App and configure:
     - Name: FraudDetector
     - Interface: SwiftUI (or UIKit, if you prefer).
     - Language: Swift.
2. Add the .mlmodel files:
   - Drag the files into the project.
   - Ensure the models' target membership is checked.
4. Image Classification (Fake Documents)
Step-by-Step:
1. Load the Model
Ensure the model was added correctly. Let's assume the model is called DocumentClassifier.
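Before wiring anything up, it can help to confirm that the generated DocumentClassifier class (from the assumed model above) actually loads, and that its input description matches what we plan to feed it. A quick sanity check:

import CoreML

// Quick sanity check: the generated DocumentClassifier class should load,
// and its input description should match what we plan to feed it (a 224x224 image).
do {
    let model = try DocumentClassifier(configuration: MLModelConfiguration())
    print(model.model.modelDescription.inputDescriptionsByName)
    print(model.model.modelDescription.outputDescriptionsByName)
} catch {
    print("Could not load DocumentClassifier: \(error)")
}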
2. Convert Image to CVPixelBuffer
Add an extension to convert images (UIImage) into the CVPixelBuffer format required by Core ML. The 224×224 size used below is an assumption; it must match the input size your model expects:
import UIKit
import CoreML

extension UIImage {
    /// Converts the image into a 224x224 CVPixelBuffer, the input format
    /// expected by the (assumed) DocumentClassifier model.
    func toPixelBuffer() -> CVPixelBuffer? {
        let width = 224
        let height = 224
        let attrs = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue!,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue!
        ] as CFDictionary

        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            width,
                            height,
                            kCVPixelFormatType_32ARGB,
                            attrs,
                            &pixelBuffer)
        guard let buffer = pixelBuffer, let cgImage = self.cgImage else { return nil }

        // Lock the buffer for writing while we draw the image into it.
        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }

        let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        // Drawing into the 224x224 context also rescales the image.
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return buffer
    }
}
3. Make a Prediction
import SwiftUI
import CoreML
struct DocumentAnalysisView: View {
@State private var documentImage: UIImage?
@State private var predictionResult: String = "No result yet"
var body: some View {
VStack {
if let documentImage = documentImage {
Image(uiImage: documentImage)
.resizable()
.scaledToFit()
.frame(height: 300)
} else {
Text("Upload a document image")
}
Button("Upload Image") {
// Code to select an image (not detailed here)
}
Button("Analyze Document") {
if let image = documentImage?.toPixelBuffer() {
analyzeImage(buffer: image)
}
}
Text(predictionResult)
}
.padding()
}
func analyzeImage(buffer: CVPixelBuffer) {
do {
let model = try DocumentClassifier(configuration: .init())
let prediction = try model.prediction(image: buffer)
predictionResult = prediction.label // E.g., "Fake" or "Legitimate"
} catch {
predictionResult = "Error analyzing document"
}
}
}
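The "Upload Image" button above is only a placeholder. One possible way to implement it, assuming iOS 16 or later, is PhotosPicker from the PhotosUI framework; here's a minimal sketch you could adapt into DocumentAnalysisView:

import SwiftUI
import PhotosUI

// A small picker view you could embed in place of the "Upload Image" button.
// Requires iOS 16+. `documentImage` is the same @State property shown above,
// passed in as a binding.
struct DocumentPickerSection: View {
    @Binding var documentImage: UIImage?
    @State private var selectedItem: PhotosPickerItem?

    var body: some View {
        PhotosPicker("Upload Image", selection: $selectedItem, matching: .images)
            .onChange(of: selectedItem) { newItem in
                Task {
                    // Load the picked item as raw data and convert it to a UIImage.
                    if let data = try? await newItem?.loadTransferable(type: Data.self),
                       let image = UIImage(data: data) {
                        documentImage = image
                    }
                }
            }
    }
}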
5. Text Analysis (Suspicious Messages)
We'll use a model that classifies messages as "Fraudulent" or "Legitimate."
1. Model in the Project
The model will be named TextSentimentClassifier.
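If you want to train this model yourself, a minimal Create ML sketch could look like the following. It runs on macOS, and the messages.csv file with "text" and "label" columns is hypothetical; adjust the column names to match your data.

// Run this in a macOS Swift Playground or command-line tool, not on iOS.
import CreateML
import Foundation

// Hypothetical CSV with two columns: "text" (the message) and "label"
// ("Fraudulent" or "Legitimate").
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/messages.csv"))

let classifier = try MLTextClassifier(trainingData: data,
                                      textColumn: "text",
                                      labelColumn: "label")

// Export the model so it can be dragged into the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/TextSentimentClassifier.mlmodel"))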
2. Implementation
import SwiftUI
import CoreML

struct TextAnalysisView: View {
    @State private var message: String = ""
    @State private var analysisResult: String = "No result yet"

    var body: some View {
        VStack {
            TextField("Enter the message", text: $message)
                .textFieldStyle(RoundedBorderTextFieldStyle())
                .padding()
            Button("Analyze Message") {
                analyzeMessage(text: message)
            }
            Text(analysisResult)
                .padding()
        }
        .padding()
    }

    func analyzeMessage(text: String) {
        do {
            // Load the generated model class and classify the message.
            let model = try TextSentimentClassifier(configuration: .init())
            let prediction = try model.prediction(text: text)
            analysisResult = prediction.label // E.g., "Fraudulent" or "Legitimate"
        } catch {
            analysisResult = "Error analyzing message"
        }
    }
}
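As a side note, the same Core ML model can also be wrapped in NLModel from the NaturalLanguage framework, which takes care of tokenization for you. A minimal sketch, assuming the generated TextSentimentClassifier class from above:

import NaturalLanguage
import CoreML

// Wraps the generated Core ML class in NLModel, which handles tokenization.
func classifyMessage(_ text: String) -> String? {
    guard let coreMLModel = try? TextSentimentClassifier(configuration: MLModelConfiguration()).model,
          let nlModel = try? NLModel(mlModel: coreMLModel) else {
        return nil
    }
    return nlModel.predictedLabel(for: text) // E.g., "Fraudulent" or "Legitimate"
}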
6. Final Integration
Combine both functionalities into a single interface with tabs or navigation, so users can choose between:
- Analyzing Documents.
- Analyzing Messages.
Example of Navigation with SwiftUI:
import SwiftUI

@main
struct FraudDetectorApp: App {
    var body: some Scene {
        WindowGroup {
            TabView {
                DocumentAnalysisView()
                    .tabItem {
                        Label("Documents", systemImage: "doc.text.magnifyingglass")
                    }
                TextAnalysisView()
                    .tabItem {
                        Label("Messages", systemImage: "text.bubble")
                    }
            }
        }
    }
}
7. Expected Results
- When uploading a document image, the app will indicate whether it is fake or legitimate.
- When typing a message, the app will identify whether it is suspicious or trustworthy.
💡 Extra Tips:
- Test the models with real data.
- Use Instruments in Xcode to analyze Core ML performance.
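If prediction latency matters, you can also experiment with MLModelConfiguration's computeUnits and time a prediction directly. A rough sketch using the assumed DocumentClassifier:

import CoreML
import Foundation

// Rough latency check around a single prediction with the assumed DocumentClassifier.
func timePrediction(buffer: CVPixelBuffer) {
    let config = MLModelConfiguration()
    config.computeUnits = .all // Let Core ML choose CPU, GPU, or Neural Engine.

    do {
        let model = try DocumentClassifier(configuration: config)
        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(image: buffer)
        print("Prediction took \(CFAbsoluteTimeGetCurrent() - start) seconds")
    } catch {
        print("Prediction failed: \(error)")
    }
}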
Now, with the power of Core ML, you're ready to take fraud detection in banking apps to the next level! 🚀