The framework may emit messages which do not have a time stamp,
yet we unconditionally converted the time field; fix that.
The Msg constructor replaces falsy time fields with the current
date, so we can also remove the duplication from that code path.
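For illustration, a minimal sketch of the intended behavior; the
`FrameworkEvent` and `fromEvent` names are assumptions, not the real
code:

```ts
// Sketch only: convert the time field when the framework provided
// one, and let the Msg constructor supply the fallback otherwise.
type FrameworkEvent = {time?: number};

class Msg {
	time: Date;

	constructor(attr: {time?: Date} = {}) {
		// Falsy time fields (undefined, null, 0) fall back to "now".
		this.time = attr.time || new Date();
	}
}

function fromEvent(event: FrameworkEvent): Msg {
	return new Msg({
		// Convert only when a time stamp is present; otherwise let
		// the constructor pick the current date.
		time: event.time ? new Date(event.time) : undefined,
	});
}
```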
The publicClient interface is utterly horrific.
It allows any client to inject arbitrary events into the socket.io
event stream.
This should get wrapped into a "plugin" event so that it can be
properly typed; better yet, it should be removed completely.
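If the wrapping route is taken, one possible envelope could look like
this; `PluginEvent`, `register`, and `dispatch` are made-up names,
purely illustrative:

```ts
import type {Socket} from "socket.io";

// A single, typed "plugin" event instead of an open event stream.
type PluginEvent = {
	plugin: string; // target plugin
	payload: unknown; // opaque to the core; validated by the plugin
};

function register(socket: Socket, dispatch: (e: PluginEvent) => void) {
	socket.on("plugin", (event: PluginEvent) => {
		dispatch(event); // the only way client events enter the system
	});
}
```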
This lays the foundation for a cleaning task that is largely
database-agnostic.
All calls act on a "DeletionRequest", so interpretation of the
config goes through a single point.
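A hedged sketch of the idea; field and function names here are
assumptions, not the actual shape:

```ts
// Every cleaning operation is expressed as a DeletionRequest, so the
// config is interpreted in exactly one place and each storage backend
// merely acts on the request.
type DeletionRequest = {
	olderThanDays: number; // messages older than this are candidates
	messageTypes: string[]; // which message types may be deleted
	limit: number; // cap per run, keeping the task incremental
};

// Single point where config becomes a concrete request.
function requestFromConfig(config: {maxAgeDays: number}): DeletionRequest {
	return {
		olderThanDays: config.maxAgeDays,
		messageTypes: ["message", "notice"],
		limit: 1000,
	};
}

// Backends stay database-agnostic: they only consume the request.
interface MessageStorage {
	deleteMessages(req: DeletionRequest): Promise<number>;
}
```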
So far, the bind config only affected the IRC connections.
However, nothing in our doc comment says that this is intentional:
> ### bind
> Set the local IP to bind to for outgoing connections.
This commit fixes the leak and applies the bind setting to all
outgoing requests, as described by the docstring.
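For illustration, Node's request options accept a `localAddress`,
which is the same mechanism the IRC connection already honored;
`fetchWithBind` is an illustrative name and `bind` stands in for the
user's bind config value:

```ts
import * as https from "https";

// Sketch: bind outgoing HTTPS requests to the configured local IP.
function fetchWithBind(url: string, bind?: string) {
	return https.get(url, {
		// Only set when configured, so the default route still applies.
		...(bind ? {localAddress: bind} : {}),
	});
}
```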
Add the ability to migrate our db in the upwards direction.
Use the facility to add primary keys to our messages table.
This should allow work like jumping to messages and the like.
This also introduces the framework for rollback, without actually
hooking it up.
This should be easy enough to do when the need arises.
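A hedged sketch of what such an "up" migration list could look like;
the real statements and version numbers live in the store, and all
names here are illustrative:

```ts
// Minimal surface so the sketch stays storage-library agnostic.
interface Db {
	exec(sql: string): void;
}

type Migration = {
	version: number;
	up: string[]; // statements applied in order
	down?: string[]; // rollback, defined but not wired up yet
};

const migrations: Migration[] = [
	{
		version: 2,
		up: [
			// SQLite cannot add a PRIMARY KEY in place, so rebuild the table.
			"ALTER TABLE messages RENAME TO messages_old",
			`CREATE TABLE messages (
				id INTEGER PRIMARY KEY AUTOINCREMENT,
				time INTEGER, type TEXT, msg TEXT
			)`,
			`INSERT INTO messages (time, type, msg)
				SELECT time, type, msg FROM messages_old`,
			"DROP TABLE messages_old",
		],
	},
];

// Apply every migration newer than the version stored in the DB.
function migrateUp(db: Db, fromVersion: number): void {
	for (const m of migrations) {
		if (m.version > fromVersion) {
			m.up.forEach((stmt) => db.exec(stmt));
		}
	}
}
```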
This enables db migrations to be undone, or "down migrated".
Down migrations must not run automatically, as that could lead
to severe data loss.
Hence, we still hard fail if the schema version we expect is
lower than what we have in the DB.
A CLI will be added in a later commit that allows users to
down-migrate explicitly.
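A sketch of the guard, with illustrative names: upgrades run, but a
DB that is ahead of the code makes us hard fail instead of silently
down-migrating and destroying data:

```ts
// Refuse to run old code against a newer schema; downgrading stays
// an explicit, user-initiated action.
function checkSchemaVersion(dbVersion: number, codeVersion: number): void {
	if (dbVersion > codeVersion) {
		throw new Error(
			`Database schema v${dbVersion} is newer than the app's v${codeVersion}; ` +
				"refusing to start. Down-migrate explicitly via the CLI."
		);
	}
}
```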
We should not mess with irc-framework internals.
Technically we shouldn't even access the connection object;
it's not part of the documented API surface.
We want primary keys to never get re-used, so that we can
implement jump to messages / context fetching etc. in the future.
This isn't hooked up to the rest of the code at all yet; only
the schema is changed.
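For context, a small demonstration of the SQLite behavior this relies
on (better-sqlite3 used purely for illustration):

```ts
import Database from "better-sqlite3";

// With a plain INTEGER PRIMARY KEY, SQLite may hand a deleted row's
// id out again, so a stored message reference could silently point
// at a different message. AUTOINCREMENT keeps ids strictly increasing.
const db = new Database(":memory:");
db.exec("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)");
db.exec("INSERT INTO t (v) VALUES ('a')"); // gets id 1
db.exec("DELETE FROM t WHERE id = 1");
db.exec("INSERT INTO t (v) VALUES ('b')");

const row = db.prepare("SELECT id FROM t").get() as {id: number};
console.log(row.id); // 2 -- id 1 is never handed out again
```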
This sets up the testing infrastructure to test migrations we are
doing.
It's done directly on an in-memory database; we are only
interested in the statements themselves, and it's easier than
trying to inject a prepared db into the store.
We do add some dummy data though, to make sure the statements
actually do what we expect.
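A mocha-style sketch of that setup; `runMigrations` is an assumed
name for the code under test, and the schema is illustrative:

```ts
import assert from "node:assert";
import Database from "better-sqlite3";

// The migration runner under test; the name is assumed.
declare function runMigrations(db: InstanceType<typeof Database>): void;

describe("migrations", function () {
	it("adds primary keys while keeping existing rows", function () {
		const db = new Database(":memory:"); // cheap, fresh per test

		// Old schema plus dummy data, so the migration has to move
		// rows rather than merely create empty tables.
		db.exec("CREATE TABLE messages (time INTEGER, msg TEXT)");
		db.exec("INSERT INTO messages VALUES (1, 'hello')");

		runMigrations(db);

		const row = db.prepare("SELECT id, msg FROM messages").get();
		assert.deepStrictEqual(row, {id: 1, msg: "hello"});
	});
});
```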
Prior to this, the search was racy, but one tends to notice
this only when the DB is large or a network is involved.
The user can initiate a search, get bored, navigate to another
channel, and issue a different search.
The results of the first search then come back and hilarity
ensues, as we are now confused about the state.
To avoid this, keep track of the last search issued; any result
that comes in for anything other than the active query is
garbage and can be dropped.
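A minimal sketch of the guard, with illustrative names rather than
the actual store code:

```ts
type SearchQuery = {channel: string; term: string};

// Whatever was searched last is the active query.
let activeQuery: SearchQuery | undefined;

async function runSearch(
	query: SearchQuery,
	fetchResults: (q: SearchQuery) => Promise<string[]>,
	render: (results: string[]) => void
): Promise<void> {
	activeQuery = query; // remember the most recent search

	const results = await fetchResults(query);

	// A late response for anything but the active query is garbage.
	if (activeQuery !== query) {
		return;
	}

	render(results);
}
```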