This lays the foundation for building a cleaning task that is largely
database agnostic.
All calls act on a "DeletionRequest", so interpretation of the config
goes through a single point.
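A minimal sketch of the idea follows; the field names and the config
shape are illustrative, not the actual schema used by the cleaner.

```ts
// Hypothetical shape of a deletion request; field names are illustrative.
type DeletionRequest = {
	olderThanDays: number; // derived from the storage policy config
	messageTypes: string[]; // which message types are eligible for deletion
	limit: number; // delete in batches to keep the DB responsive
};

// Single point where the config is interpreted; every backend only ever
// sees a DeletionRequest, never the raw config.
function requestFromConfig(config: {maxAgeDays: number}): DeletionRequest {
	return {
		olderThanDays: config.maxAgeDays,
		messageTypes: ["message", "action"],
		limit: 1000,
	};
}
```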
So far the bind config only impacted the IRC connections.
However, nothing in our doc comment says that this is intentional.
> ### bind
> Set the local IP to bind to for outgoing connections.
This commit fixes the leak and applies the bind address to all
outgoing requests, as described by the docstring.
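For illustration, outgoing HTTP(S) requests in Node can honor the same
bind address via the standard `localAddress` option; this is a rough
sketch, not the project's actual fetch code, and the helper name is
made up.

```ts
import * as https from "node:https";

// Rough sketch: apply the configured bind address to an outgoing HTTPS
// request via Node's standard `localAddress` option.
function fetchWithBind(url: string, bindAddress?: string): Promise<string> {
	return new Promise((resolve, reject) => {
		const req = https.get(url, {localAddress: bindAddress}, (res) => {
			let body = "";
			res.on("data", (chunk) => (body += chunk));
			res.on("end", () => resolve(body));
		});
		req.on("error", reject);
	});
}
```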
Add the ability to migrate our db in the upwards direction.
Use the facility to add primary keys to our messages table.
This should allow work like jumping to messages and the like.
This also introduces the framework for rollback, without actually
hooking it up.
This should be easy enough to do when the need arises.
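In sketch form, the framework boils down to an ordered list of
versioned statements plus a selection step; the statement shown is
illustrative, not the real migration.

```ts
// Sketch of a versioned migration list.
type Migration = {
	version: number;
	stmts: string[];
	rollback?: string[]; // declared for the rollback framework, not wired up yet
};

const migrations: Migration[] = [
	{
		version: 2,
		stmts: ["CREATE INDEX IF NOT EXISTS msg_time_idx ON messages (time)"],
		rollback: ["DROP INDEX IF EXISTS msg_time_idx"],
	},
];

// Only migrations newer than the version recorded in the DB are applied.
function necessaryMigrations(currentVersion: number): Migration[] {
	return migrations.filter((m) => m.version > currentVersion);
}
```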
This enables db migrations to be undone, or "down migrated".
Down migrations shouldn't run automatically, as that could lead to
severe data loss.
Hence, we still hard fail if the code expects a schema version lower
than the one recorded in the DB.
A CLI will be added in a later commit that allows users to explicitly
do that.
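The guard itself could be as simple as this sketch (names are
illustrative):

```ts
// Sketch: refuse to start if the DB was written by a newer schema version.
// Rolling back must stay an explicit, user-initiated action (via the CLI
// added later), because it can destroy data.
function checkVersion(dbVersion: number, codeVersion: number): void {
	if (dbVersion > codeVersion) {
		throw new Error(
			`Database version ${dbVersion} is newer than supported version ${codeVersion}; ` +
				"run an explicit down-migration if you really want to downgrade",
		);
	}
}
```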
We should not mess with irc-framework internals.
Technically we shouldn't even access the connection object; it's not
part of the documented API surface.
We want primary keys to never get re-used, so that we can implement
jump-to-message / context fetching etc. in the future.
This isn't hooked up to the rest of the code at all yet; only the
schema is changed.
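For context: in SQLite a plain `INTEGER PRIMARY KEY` can reuse rowids
after rows are deleted, while `AUTOINCREMENT` guarantees that ids only
ever grow, which is what makes stable message references possible.
A sketch of the relevant schema (column names other than `id` are
illustrative):

```ts
// AUTOINCREMENT forces SQLite to hand out ids strictly larger than any id
// ever used in the table, so a deleted message's id is never recycled.
const createMessages = `
	CREATE TABLE messages (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		network TEXT,
		channel TEXT,
		time INTEGER,
		type TEXT,
		msg TEXT
	)
`;
```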
This sets up the testing infrastructure to test migrations we are
doing.
It's done directly on an in-memory database; we are only interested in
the statements themselves, and it's easier than trying to inject a
prepared db into the store.
We do add some dummy data, though, to make sure the statements
actually execute as we expect.
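A test along these lines can seed dummy data, run the statements
against `:memory:`, and assert the data survives. This sketch assumes
a synchronous better-sqlite3-style driver and an illustrative
migration statement; the real store may use a different sqlite driver.

```ts
import Database from "better-sqlite3";

// Sketch of a migration test on an in-memory DB with seeded dummy data.
const db = new Database(":memory:");

db.exec("CREATE TABLE messages (time INTEGER, msg TEXT)");
db.prepare("INSERT INTO messages (time, msg) VALUES (?, ?)").run(123, "hello");

// Illustrative migration statement under test.
const stmts = ["CREATE INDEX IF NOT EXISTS msg_time_idx ON messages (time)"];

for (const stmt of stmts) {
	db.exec(stmt);
}

const row = db.prepare("SELECT msg FROM messages WHERE time = ?").get(123) as {
	msg: string;
};
console.assert(row.msg === "hello", "dummy data must survive the migration");
```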
Prior to this, search is racy, but one tends to notice this only when
the DB is large or a network is involved.
The user can initiate a search, get bored, navigate to another
channel, and issue a different search.
Now, however, the results of the first search come back and hilarity
ensues, as we are confused about the state.
To avoid this, keep track of the last search issued; any result that
comes in for a query other than the active one is stale and can be
dropped.
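Roughly, the guard looks like this (a sketch; the transport and render
helpers are hypothetical stubs, not the real client code):

```ts
// Hypothetical stubs so the sketch is self-contained.
const sendSearchRequest = (query: string) => {
	/* emit the search over the socket */
};
const renderResults = (results: unknown[]) => {
	/* update the UI */
};

// Remember the query we last issued; drop responses for anything else.
let activeQuery: string | null = null;

function startSearch(query: string) {
	activeQuery = query;
	sendSearchRequest(query);
}

function onSearchResults(query: string, results: unknown[]) {
	if (query !== activeQuery) {
		return; // stale response from an earlier search, drop it
	}
	renderResults(results);
}
```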
The only reason we accepted a client was so that we have access to the
next message id when we need it.
So let's accept an id provider function instead.
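In sketch form (names illustrative), the dependency narrows from a
whole client object to a single function:

```ts
// Sketch: the store takes an id provider function instead of a client.
type IdProvider = () => number;

class MessageStore {
	constructor(private nextMessageId: IdProvider) {}

	index(msg: {text: string}): number {
		const id = this.nextMessageId();
		// ... persist `msg` under `id` (omitted in this sketch) ...
		console.log(`stored message ${id}: ${msg.text}`);
		return id;
	}
}

// Usage: the caller decides where ids come from; no client object needed.
let counter = 0;
const store = new MessageStore(() => ++counter);
store.index({text: "hello"});
```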
The interface should not contain things that aren't part of the
storage API.
Further, rename ISqliteMessageStorage to SearchableMessageStorage, as
the sqlite-specific name is also an implementation detail.
We'll never have a second sqlite backend, so the name seems
strange.
TL is stupid and doesn't wait for message{Provider,Storage} to
settle before it starts using the store.
While this should be fixed globally, we can hack around the problem by
pushing everything onto the call stack and hoping that we'll
eventually finish the setup before we blow the stack.
Message stores are more complicated than a sync "fire and forget" API
allows for.
For starters, non-trivial stores (say, sqlite) can fail during init,
and we want to be able to catch that.
Second, we really need to be able to run migrations and such, which
may block (and fail) the activation of the store.
On the plus side, this pushes error handling to the caller rather than
the stores, which is a good thing, as it allows us to eventually
surface errors to the client in the UI rather than just logging them
to stdout on the server.
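A hedged sketch of what the async-ified surface might look like; the
interface and names are illustrative, the point is that enabling the
store can take time or fail, and the caller decides how to report it.

```ts
// Sketch of an async store lifecycle.
interface MessageStorage {
	enable(): Promise<void>; // may run migrations, may reject
	close(): Promise<void>;
}

async function activateStore(store: MessageStorage): Promise<boolean> {
	try {
		await store.enable();
		return true;
	} catch (err) {
		// Caller-side handling: today a log line, eventually a client-visible error.
		console.error("Failed to enable message storage:", err);
		return false;
	}
}
```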