The major observation here is that CanvasButtonsViewModel is a UIButton subclass, which is, by definition, a “view”, not a “view model”. I would suggest renaming this to CanvasButton.

There are two salient aspects to a “view model” within MVVM that are missing here:

  1. A view model is generally an abstract type, not tied to any particular UI framework (not UIKit, not AppKit, etc.). The goal is to have an abstract type that encapsulates all of the model data used by a particular view (e.g., the sets of coordinate pairs, colors, and widths that constitute the abstract representation of the diagram).

    E.g., the view controller might therefore have a view model that captures the current diagram (as outlined above), but also captures state-related variables (the current color, the default line width for new strokes, etc.).

    Now, at this nascent stage of your project, this abstraction might offer only modest benefit, but when you start adding facilities for saving and loading drawings to and from persistent storage, constructing smoothed renditions of paths, etc., this sort of abstraction starts to bear fruit, avoiding cluttering the view layer (the UIView and the UIViewController) with all of this ancillary logic.

    This abstraction of the view model layer into a separate object is quite easy and natural, and makes it easier to reason about as the app grows in complexity.

    That having been said, you appear to be contemplating a view model for the button that consists of basic configuration information (the image, width, height, and background color). Those are basic properties of any button and really do not warrant a separate view model for the button itself, IMHO. You can, if you really want, but there is very little value in doing so. Personally, I would lose the “view model” of the button and focus on the broader “view model” of the view (i.e., the lines, the default color for new lines, etc.).

  2. There is a second dimension to MVVM, beyond the mere “view model” abstraction. Namely, MVVM generally is understood to entail a “data binding” between the view and the model. The idea is that you hook it up once, and any changes in the model will be automatically reflected in the view and vice versa.

    This aspect of MVVM is a less natural fit in UIKit without resorting to third-party binding libraries. (It is, however, a very natural fit with SwiftUI, with its ObservableObject types and Binding linkages to UI controls, and its newer Observation framework. But that is a separate topic altogether.)

    That having been said, while there are various ways to achieve this sort of binding, one technique in UIKit apps is to use closures which you specify during the construction of the view in the view controller:

    • You might have a closure that the view uses to inform the view model of user input (e.g., a gesture started, a gesture continued, etc.);
    • You might have a closure in the view model to inform the view when it needs to be re-rendered because the model changed; and
    • You might have closures on your buttons to inform the view model that the user tapped on the button, triggering some update in the view model.

    Note, after doing this, we no longer need to set targets for button taps or the like. We just configure the view (including all of these closures, which are the glue for our MVVM), and we are done.

So it comes down to whether you are simply looking for the “view” vs “view model” abstraction, or whether you are aspiring for a complete MVVM implementation. But here is a closure-based rendition:

The view controller:

class VideoEditorViewController: UIViewController {
    private let viewModel = VideoEditorViewModel()
    private let canvas = CanvasView()
    private let picker = UIColorPickerViewController()

    override func viewDidLoad() {
        super.viewDidLoad()
        setCanvasUI()
    }
}

// MARK: - Private interface

private extension VideoEditorViewController {
    func setCanvasUI() {
        view.addSubview(canvas)
        canvas.backgroundColor = .clear

        bindViewModelToUpdateCanvas()
        bindCanvasToUpdateViewModel()
        addButtons()
    }

    func bindViewModelToUpdateCanvas() {
        viewModel.onUpdateDrawing = { [weak self] lines in
            self?.canvas.lines = lines
        }
    }

    func bindCanvasToUpdateViewModel() {
        canvas.onStartGesture = { [weak self] point in
            self?.viewModel.startNewLine(at: point)
        }

        canvas.onContinueGesture = { [weak self] points in
            self?.viewModel.appendToLastLine(points)
        }
    }

    func addButtons() {
        let size = CGSize(width: 50, height: 50)

        let undoButton = CanvasButton(size: size, image: UIImage(systemName: "arrowshape.turn.up.left.fill")) { [weak self] _ in
            self?.viewModel.undo()
        }

        let colorPicker = CanvasButton(size: size, image: UIImage(systemName: "paintpalette.fill")) { [weak self] _ in
            guard let self else { return }
            picker.delegate = self
            present(picker, animated: true)
        }

        let trashCanButton = CanvasButton(size: size, image: UIImage(systemName: "trash")) { [weak self] _ in
            self?.viewModel.undoAll()
        }

        let uploadViewButton = CanvasButton(size: size, image: UIImage(systemName: "envelope")) { _ in // [weak self]
            print("upload not implemented")
        }

        let toggleDrawing = CanvasButton(size: size, image: UIImage(systemName: "pencil.circle")) { [weak self] _ in
            self?.viewModel.isDrawable.toggle()
        }

        let stackView = UIStackView(arrangedSubviews: [
            undoButton,
            trashCanButton,
            colorPicker,
            uploadViewButton,
            toggleDrawing
        ])
        view.addSubview(stackView)

        stackView.axis = .vertical
        view.bringSubviewToFront(stackView)
        stackView.spacing = 30
        stackView.snp.makeConstraints { make in
            make.right.equalTo(view.snp.rightMargin).offset(-20)
            make.top.equalTo(view.snp.topMargin)
        }
        canvas.frame = view.bounds
    }
}

// MARK: - UIColorPickerViewControllerDelegate

extension VideoEditorViewController: UIColorPickerViewControllerDelegate {
    func colorPickerViewController(_ viewController: UIColorPickerViewController, didSelect color: UIColor, continuously: Bool) {
        viewModel.strokeColor = color
    }
}

Note, no @IBAction or @objc functions. Just hook up all of the bindings with closures, as outlined above.

Its view model:

class VideoEditorViewModel {
    var strokeColor: UIColor = .blue
    var strokeWidth: Float = 10
    var isDrawable: Bool = true
    private(set) var lines: [Line] = [] { didSet { onUpdateDrawing?(lines) } }
    var onUpdateDrawing: (([Line]) -> Void)?
}

extension VideoEditorViewModel {
    @discardableResult
    func undo() -> Line? {
        lines.popLast()
    }

    func undoAll() {
        lines.removeAll()
    }

    func startNewLine(at point: CGPoint?) {
        let line = Line(
            strokeWidth: strokeWidth,
            color: strokeColor,
            points: [point].compactMap { $0 },
            drawable: isDrawable
        )
        lines.append(line)
    }

    func appendToLastLine(_ points: [CGPoint]?) {
        guard let points else { return }

        let index = lines.count - 1
        if index >= 0 {
            lines[index].points.append(contentsOf: points)
        }
    }
}
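
A minimal sketch of the Line model, which is not shown above (this is an assumption based purely on how it is used in the view model and in CanvasView; in particular, the straight-segment simplePath is my guess):

struct Line {
    var strokeWidth: Float
    var color: UIColor
    var points: [CGPoint]
    var drawable: Bool

    // Assumed implementation: a simple path connecting the captured points
    // with straight line segments (no smoothing).
    var simplePath: UIBezierPath {
        let path = UIBezierPath()
        guard let first = points.first else { return path }
        path.move(to: first)
        for point in points.dropFirst() {
            path.addLine(to: point)
        }
        return path
    }
}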

CanvasView:

class CanvasView: UIView {
    var lines: [Line] = [] { didSet { setNeedsDisplay() } }

    var onStartGesture: ((CGPoint?) -> Void)?
    var onContinueGesture: (([CGPoint]) -> Void)?

    override init(frame: CGRect = .zero) {
        super.init(frame: frame)
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)

        lines
            .filter { $0.drawable }
            .forEach { line in
                let path = line.simplePath
                path.lineJoinStyle = .round
                path.lineCapStyle = .round
                path.lineWidth = CGFloat(line.strokeWidth)

                line.color.setStroke()

                path.stroke()
            }
    }
}

// MARK: - Touches

extension CanvasView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        onStartGesture?(touches.first?.location(in: self))
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        if let touches = event?.coalescedTouches(for: touch) {
            onContinueGesture?(touches.compactMap { $0.location(in: self) })
        } else {
            onContinueGesture?([touch.location(in: self)])
        }
    }
}

Note that the CanvasView (formerly just Canvas) simply renders the lines and captures user input.
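
The CanvasButton used in the view controller is also not shown above. A minimal sketch of what it might look like, assuming it is just a UIButton subclass configured with a fixed size, an image, and a tap closure (wired up via a UIAction rather than a target/selector):

class CanvasButton: UIButton {
    init(size: CGSize, image: UIImage?, handler: @escaping UIActionHandler) {
        super.init(frame: CGRect(origin: .zero, size: size))

        setImage(image, for: .normal)

        // Forward taps to the supplied closure rather than a target/selector.
        addAction(UIAction(handler: handler), for: .touchUpInside)

        // Pin the button to the requested size (SnapKit, as used elsewhere).
        snp.makeConstraints { make in
            make.size.equalTo(size)
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}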

Now, there are different ways to skin the cat, but this is an example of (a) using closures to bind view models and views; and (b) using view models to abstract implementation details from the views. There are lots of ways of achieving the same, but hopefully this illustrates the idea.

I also created a GitHub repo with the above. If you look at the commit history, you can see the evolution from your original code, to some cosmetic changes, to a few bug fixes, to a rendition where I created a view model for just the CanvasView, to one where I created a view model for the whole view controller.


There are some minor issues in your original code snippet, FYI:

  • You are constructing the stack view in viewDidAppear. Each time it appears, you are going to create a new stack view. This should be done only once, in viewDidLoad.

  • The color-picker button’s handler presents the picker and, as soon as that presentation is complete, sets the canvas color. Setting the color should instead be done in the UIColorPickerViewControllerDelegate method colorPickerViewController(_:didSelect:continuously:).

  • You are not using UIImagePickerControllerDelegate, so that conformance declaration (especially where you do not implement any of its methods) is unnecessary.

  • In touchesBegan, you really should capture where the gesture started. Right now, you are discarding the first point of the gesture.

  • Where you are extracting the position from Set<UITouch>, you really should specify the view as self, otherwise you will be getting coordinates in the window’s coordinate system, not your current view’s coordinate system. (You might not notice if the view is located at (0, 0), but if it was not, you would not get the coordinates you expected.)

  • When extracting updates during gestures, you should probably consider coalesced touches, for more fine-grained touch information (on capable hardware). It’s not relevant to the broader MVVM question, but it is a consideration when processing touches.

  • Personally, I would also capture predictive touches to reduce lag in the UI (but you need a slightly more refined model to capture this dimension). See https://stackoverflow.com/a/64709364/1271826 or https://stackoverflow.com/a/58456179/1271826.

  • When rendering gestures, I would also be inclined to smooth them via Catmull-Rom or Hermite splines. See https://stackoverflow.com/a/34583708/1271826. Again, this is unrelated to the broader MVVM question, so I will leave that to the reader.

  • I must confess that it offends my MVVM-purist tendencies that a model object (like Line) is still entangled with UIKit, because of its UIColor property. I might be inclined to abstract this to some non-platform-specific structure (see the sketch after this list), but, again, that is beyond the scope of this question.

  • I did not grok the purpose of the drawable property of Line. I attempted to preserve it in my example, but I am not sure of the utility of capturing a Line that is not drawable. I interpreted it as a path that was not to be rendered, but that doesn’t quite make sense IMHO, so I apologize if I misunderstood your intent. Nonetheless, I attempted to preserve the idea. It is not really relevant to the broader question, but FWIW.

    But as a general design principle, many of us avoid entangling model objects with platform-specific types like this. FWIW.
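
As a purely illustrative sketch of that last point (the StrokeColor type and its mapping below are hypothetical, not part of the code above), one might give the model a platform-neutral color representation and convert to UIColor only in the view layer:

// Hypothetical platform-neutral color for the model layer.
struct StrokeColor {
    var red: Double
    var green: Double
    var blue: Double
    var alpha: Double = 1
}

// Convert to UIColor only at the view layer, keeping UIKit out of the model.
extension UIColor {
    convenience init(_ strokeColor: StrokeColor) {
        self.init(
            red: CGFloat(strokeColor.red),
            green: CGFloat(strokeColor.green),
            blue: CGFloat(strokeColor.blue),
            alpha: CGFloat(strokeColor.alpha)
        )
    }
}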
