Also renames the dialog line1 parameter to message.
The isPassword change likely bumps the minimum supported version up to
Kodi 18. This can be worked around if desirable.
Note: This has a previously applied optimization of using a local cache.
Before:
```
458570 66.934 0.000 66.934 0.000 {method 'execute' of 'sqlite3.Cursor' objects}
246771 58.075 0.000 58.075 0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
```
After:
```
368883 66.220 0.000 66.220 0.000 {method 'execute' of 'sqlite3.Cursor' objects}
157084 58.160 0.000 58.160 0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
```
Once again, the number of calls to execute and fetchone is reduced, but
the total time spent in each remains relatively stable.
Profiling has shown that there are many calls to sqlite3 execute and
fetchone, which take a significant amount of time. A simple way of
reducing these is to replace the two-stage initialization of table row
data with a unified add, sketched below.
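As an illustration, here is a minimal sketch of the idea; the sets table, its columns, and the helper names are assumptions for illustration, not the project's actual schema:
```python
def add_set_two_stage(cursor, name):
    # Before: one query to reserve the next id (execute + fetchone), then
    # a second execute to insert the row -- two round trips per added row.
    # `cursor` is assumed to be a sqlite3.Cursor.
    cursor.execute("SELECT COALESCE(MAX(idSet), 0) + 1 FROM sets")
    set_id = cursor.fetchone()[0]
    cursor.execute("INSERT INTO sets(idSet, strSet) VALUES (?, ?)", (set_id, name))
    return set_id

def add_set_unified(cursor, name):
    # After: a single execute computes the id and inserts in one statement.
    cursor.execute(
        "INSERT INTO sets(idSet, strSet) "
        "VALUES ((SELECT COALESCE(MAX(idSet), 0) + 1 FROM sets), ?)",
        (name,),
    )
    return cursor.lastrowid
```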
Applying this to add_set and add_country yielded these results:
Before changes
```
281784 7.054 0.000 7.054 0.000 {method 'execute' of 'sqlite3.Cursor' objects}
127443 1.114 0.000 1.114 0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
```
After changes
```
281714 7.492 0.000 7.492 0.000 {method 'execute' of 'sqlite3.Cursor' objects}
127373 1.217 0.000 1.217 0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
```
Note: The total time in fetchone has actually increased. I am hoping
this was an anomaly on my machine and that the reduction in the number
of calls will permanently reduce the total time.
* Added profiling info
* Resort to the expensive database lookup only if the person already
exists in the database.
* Prevent any access to the people database unless a person must be added.
* Use bulk operations where possible (see the sketch after this list).
* Prepare for a fresh install where the table does not yet exist.
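A minimal sketch of these points combined; the people schema and helper name are assumptions, not the project's actual code paths:
```python
def add_people_bulk(cursor, names):
    # `cursor` is assumed to be a sqlite3.Cursor.
    # Prepare for a fresh install: the table may not exist yet.
    cursor.execute("CREATE TABLE IF NOT EXISTS people(name TEXT PRIMARY KEY)")
    # One cheap read instead of an expensive per-person lookup.
    cursor.execute("SELECT name FROM people")
    known = {row[0] for row in cursor.fetchall()}
    new_names = [(n,) for n in names if n not in known]
    # Touch the people table only if someone actually must be added,
    # and insert all newcomers in a single bulk call.
    if new_names:
        cursor.executemany("INSERT INTO people(name) VALUES (?)", new_names)
```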
The recursive call to get_workers was implemented because it appeared to
be the original intent of the DTHREADS variable; otherwise the variable
was redundant. The ThreadPool is a much better use of this setting.
Added a ThreadPoolExecutor and used it to process GET requests in
multiple threads, which keeps chunks of data available for processing at
all times. Processing can begin as soon as the first chunk arrives.
Refactored the code to support this. The idea is that the "params" are
built in a batch and passed to the thread pool, which fetches the actual
results.
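A minimal sketch of that flow, assuming a get(params) helper that performs one GET request; the names here are illustrative rather than the project's exact API:
```python
from concurrent.futures import ThreadPoolExecutor

def fetch_chunks(get, param_batches, workers=5):
    # The params for every chunk are built up front; the pool then fetches
    # them concurrently and yields results in order, so processing can
    # start as soon as the first chunk arrives while later chunks download.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(get, param_batches):
            yield result
```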
Periodically release the locks on the database while the sync with the
server is being performed. This allows the UI to communicate with its
database and update in real time.
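A hedged sketch of the approach, assuming a simplified schema and batch size; the point is that the write lock is released between batches:
```python
import sqlite3

def sync_in_batches(db_path, items, batch_size=50):
    conn = sqlite3.connect(db_path)
    for start in range(0, len(items), batch_size):
        with conn:  # commits on exit, releasing the write lock
            for item_id, data in items[start:start + batch_size]:
                # Hypothetical table for illustration.
                conn.execute(
                    "INSERT OR REPLACE INTO media(id, data) VALUES (?, ?)",
                    (item_id, data),
                )
        # Between batches the database is unlocked, so the Kodi UI can
        # read from and write to it while the sync continues.
    conn.close()
```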
Also took this opportunity to refactor some common code to improve
maintainability.
Previously the user could enter any number in the limitThreads UI
component, but internally it was capped at a maximum of 50, which is
deceiving to the user. This maximum is now set as a boundary in the UI.
This also fixes a bug where the user could set the number of threads to
zero, which follows from GIGO; the UI now assists by enforcing a minimum
of 1 thread.
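For reference, a sketch of the internal clamp this change surfaces in the UI; the function name and bounds handling are illustrative:
```python
def effective_thread_limit(user_value, minimum=1, maximum=50):
    # The UI slider now enforces 1..50 directly, so this silent clamp
    # no longer disagrees with what the user sees.
    return max(minimum, min(maximum, user_value))
```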
json.dumps is a processing-intensive operation. It is called every time
data is received from the server (most noticeably during library
updates) for debug logging. If the user has debug logging disabled (the
default), they still pay for processing whose output is discarded.
The fix is to add a level of indirection where the dumps function is
only called if a string representation of the JSON is requested, i.e.
when the debug string is evaluated (see the sketch below).
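A minimal sketch of the indirection, relying on Python's standard logging behaviour of deferring argument formatting until a record is actually emitted; the class name is illustrative:
```python
import json
import logging

log = logging.getLogger(__name__)

class LazyJson(object):
    """Defers json.dumps until the object is rendered as a string."""

    def __init__(self, data):
        self.data = data

    def __str__(self):
        # Runs only when the log message is actually formatted.
        return json.dumps(self.data, indent=4)

# The logger formats its arguments only when DEBUG is enabled, so
# json.dumps never runs for users with debug logging disabled.
log.debug("server response: %s", LazyJson({"Items": []}))
```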