
Polymorphism doesn't work #62

@gvvaughan

Description


In Specl, I have a Process object that is produced by running a specl.shell subprocess, and which in turn has a selection of special matchers (.to_exit (0), .to_output "stdout", .to_fail_with "stderr", etc.) which check that they have received a Process object.

In Zile, I have an Editor object, which is a clone of specl.shell's Process, produced by running some zlisp code in a zemacs subprocess, but with additional fields for buffer and minibuffer contents upon completion. I'd like matchers that are specific to an Editor (.to_match_minibuf "C-x C-g is undefined", etc.) which check that they have received an Editor object. But the specl.shell Process matchers would still be useful, and since Editor is derived from Process they would work as expected, were it not for the fact that they reject non-Process objects.

I have some options for how to provide this:

  1. Derived prototype objects that need to be accepted by functions designed for parent objects must not set a new _type name. This is what I'm doing in Zile right now: prototype (Editor {}) returns "Process". The problem with this approach is that I can also pass raw Process objects to the Editor APIs that require the extra fields, and not find out about it until the interpreter crashes with a nil-valued field access. This is clearly not scalable.
  2. std.Object._type can be a list, and setting _type in the table argument to object cloning appends that name to the cloned object's _type list rather than overwriting it. Object.prototype returns the last (most derived) name on that list, and we add a new Object.isa method that checks the whole list to see whether an object has the named type. This would solve my current issue, and adds a sort of single-inheritance feature to std.object without much development work.
  3. Write an Interface API. An Interface is just an object that associates a name (Iterable, Indexable, etc.) with a list of method calls that an object providing the named interface must respond to. Interface methods must be added to an object first, and are then declared (added to an _interfaces set in the metatable of the object) with an API that checks that the required method calls are supported. Then at runtime, instead of rejecting anything that's not a Process object, I can accept anything that supports the Process interface: both Process objects and derived Editor objects will be accepted and will work properly, and the Editor-only APIs can detect accidental Process objects before the nil-access errors.
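Option (2) could look something like the following minimal sketch. This is a simplified stand-in for std.object's cloning, not its actual API: the names `clone`, `prototype` and `isa` here are illustrative, and the real Object machinery (metatables, _init, etc.) is elided.

```lua
-- Portability: unpack moved to table.unpack in Lua 5.2.
local unpack = table.unpack or unpack

-- Clone a parent, appending any _type in t to the type list
-- instead of overwriting it.
local function clone (parent, t)
  local obj = {}
  for k, v in pairs (parent) do obj[k] = v end
  obj._type = { unpack (parent._type or {}) }
  for k, v in pairs (t or {}) do
    if k == "_type" then
      obj._type[#obj._type + 1] = v
    else
      obj[k] = v
    end
  end
  return obj
end

-- prototype returns the most derived name on the type list.
local function prototype (obj)
  return obj._type[#obj._type]
end

-- isa checks the whole list for the named type.
local function isa (obj, name)
  for _, t in ipairs (obj._type or {}) do
    if t == name then return true end
  end
  return false
end

local Process = clone ({ _type = {} }, { _type = "Process" })
local Editor  = clone (Process, { _type = "Editor", minibuf = "" })

print (prototype (Editor))        --> Editor
print (isa (Editor, "Process"))   --> true
print (isa (Process, "Editor"))   --> false
```

With this scheme a Process matcher accepts an Editor via `isa (obj, "Process")`, while an Editor-only API can still insist on `isa (obj, "Editor")` and reject raw Process objects up front.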

I'm leaning towards (3) because it sidesteps any horrors of trying to support a multiple-inheritance paradigm, yet still provides the advantages of multiple inheritance. And it's clearly much more powerful and useful than the other two options. It seems like the runtime overhead will be incredibly light, except at the point of checking that declared interfaces are properly supported by an object, which in turn can be mitigated by skipping those checks when _DEBUG == false -- or not implementing them at all, I suppose, though I quite like the safety of preventing declaration of interfaces that are not conformed to.
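The Interface idea in (3), including the _DEBUG-gated conformance check, could be sketched like this. The names `Interface`, `declare` and `supports` are hypothetical, and for brevity the _interfaces set lives directly on the object here rather than in its metatable as proposed above.

```lua
local _DEBUG = true

-- An Interface associates a name with the methods a conforming
-- object must respond to.
local function Interface (name, methods)
  return { name = name, methods = methods }
end

-- Declare that obj provides iface; conformance is verified only
-- when _DEBUG is set, so release builds pay almost nothing.
local function declare (obj, iface)
  if _DEBUG then
    for _, m in ipairs (iface.methods) do
      assert (type (obj[m]) == "function",
              iface.name .. " requires method " .. m)
    end
  end
  obj._interfaces = obj._interfaces or {}
  obj._interfaces[iface.name] = true
  return obj
end

-- Accept anything that supports the named interface, regardless of
-- where it sits in a derivation chain.
local function supports (obj, iface)
  return type (obj) == "table"
     and obj._interfaces ~= nil
     and obj._interfaces[iface.name] == true
end

local ProcessIface = Interface ("Process", { "exitcode", "output" })

local Process = declare ({
  exitcode = function (self) return 0 end,
  output   = function (self) return "" end,
}, ProcessIface)

-- A derived Editor still supports the Process interface.
local Editor = { minibuf = "" }
for k, v in pairs (Process) do Editor[k] = v end

print (supports (Editor, ProcessIface))   --> true
```

A Process matcher would then gate on `supports (obj, ProcessIface)` instead of a _type comparison, and declaring an interface whose methods are missing fails immediately under _DEBUG rather than at some later nil access.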

My worry is that (3) is over-engineering a solution. That worry is somewhat tempered by not wanting to make a release with support for (2), only to change my mind to (3) in a later release when (for example) Zile starts to have a proper object model in its implementation and (2) starts getting in the way of doing things elegantly.
