We are using DBs of 50 GB+ on our platform. No complaints; it works great. Make sure you are doing everything right! Are you using prepared statements? *SQLite 3.7.3

  • Apply these settings (right after you create the DB; see the sketch below):
    PRAGMA main.page_size = 4096;
    PRAGMA main.cache_size = 10000;
    PRAGMA main.locking_mode = EXCLUSIVE;
    PRAGMA main.synchronous = NORMAL;
    PRAGMA main.journal_mode = WAL;
    PRAGMA main.cache_size = 5000;

Recently tested with DBs in the 160 GB range; works great as well.

PRAGMA main.temp_store = MEMORY;
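
A minimal sketch of applying these pragmas right after opening the database, using Python's sqlite3 purely for illustration; the file name is a placeholder, and note that page_size only takes effect on a new or freshly VACUUMed database:

import sqlite3

conn = sqlite3.connect("big.db")
for pragma in (
    "PRAGMA main.page_size = 4096",
    "PRAGMA main.cache_size = 10000",
    "PRAGMA main.locking_mode = EXCLUSIVE",
    "PRAGMA main.synchronous = NORMAL",
    "PRAGMA main.journal_mode = WAL",
    "PRAGMA main.temp_store = MEMORY",
):
    conn.execute(pragma)  # pragmas apply per connection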

You're storing 50 GB of data in an SQLite database? Have you considered moving up to Postgres or MySQL (or even Oracle)?

Don't just blindly apply these optimizations. In particular, synchronous=NORMAL is not crash-safe; that is, a process crash at just the wrong time can corrupt your database even in the absence of disk failures. sqlite.org/pragma.html#pragma_synchronous

@Alex, can you please explain those values and the difference between them and the default ones?

What are the performance characteristics of sqlite with very large dat...

database performance sqlite

SQLite supports a property called user_version; you can execute PRAGMA user_version to query the current schema version of your app's DB. This query can happen right at the beginning, when your app starts.

To update this user_version, execute the following statement: PRAGMA user_version = version_num;

Whenever you create an SQLite DB, it is best practice to set this user_version property, so that when you upgrade in the future you can query the current value, check what it needs to be, and execute the remaining ALTER or CREATE TABLE statements to bring the schema up to date.

First query user_version and check its value; if it is 1, you need to run the ALTER script to add the new column and then set user_version to 2.

In your case, since you haven't set user_version before, it would be difficult to differentiate a new install from an upgrade. So for now you might assume that if the DB is present it is an upgrade scenario and execute the ALTER scripts, and if it is not present it is a new install and you run the CREATE scripts. But see if you can use the above pragma to solve this problem in the future, at least.
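
A minimal sketch of this stepwise pattern, assuming Python's sqlite3 and illustrative table/column names (notes, created_at) that are not from the original question:

import sqlite3

def migrate(conn: sqlite3.Connection) -> None:
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    if version < 1:
        # fresh install: create the base schema
        conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    if version < 2:
        # upgrade from version 1: add the new column
        conn.execute("ALTER TABLE notes ADD COLUMN created_at TEXT")
    conn.execute("PRAGMA user_version = 2")

conn = sqlite3.connect("app.db")
migrate(conn)
conn.commit()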

Seems to be a good way to update and maintain the database schema; I will surely try this.

ios - Updating new version to app store with different sqlite db struc...

ios sqlite3 app-store

A single connection instance and all of its derived objects (prepared statements, backup operations, etc.) may NOT be used concurrently from multiple goroutines without external synchronization.

(This is a different SQLite driver, but this restriction also applies to yours.)

By default, SQLite aborts immediately when it encounters a database that is locked by another transaction. To allow more concurrency, you can tell it to wait for the other transaction to finish by setting a busy timeout.

Use the BusyTimeout function, if your SQLite driver has it, or execute the PRAGMA busy_timeout SQL command directly.
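
The question is about Go, but the pragma itself is plain SQL; here is a small sketch using Python's sqlite3 just to show the two equivalent knobs (the 5-second value is an arbitrary example):

import sqlite3

conn = sqlite3.connect("shared.db", timeout=5.0)  # driver-level busy handler: wait up to 5 s on a lock
conn.execute("PRAGMA busy_timeout = 5000")        # the same thing expressed as SQL, in milliseconds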

golang sqlite database connection pooling - Stack Overflow

database sqlite error-handling go

Just for completeness, since MySQL and Postgres have already been mentioned: With SQLite, use "pragma table_info()"

sqlite> pragma table_info('table_name');
cid         name        type        notnull     dflt_value  pk        
----------  ----------  ----------  ----------  ----------  ----------
0           id          integer     99                      1         
1           name                    0                       0
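
For completeness, a small sketch of reading the column names out of that pragma programmatically, assuming Python's sqlite3 and a throwaway in-memory table:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (id INTEGER PRIMARY KEY, name TEXT)")
# row[1] is the 'name' column of the table_info result shown above
columns = [row[1] for row in conn.execute("PRAGMA table_info('table_name')")]
print(columns)  # ['id', 'name']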

mysql - What is the SQL command to return the field names of a table? ...

sql mysql sql-server database

SQLite also supports a pragma statement called "table_info" which returns one row per column in a table with the name of the column (and other information about the column). You could use this in a query to check for the missing column, and if not present alter the table.

Your answer would be much more excellent were you to provide the code with which to complete that search instead of just a link.

PRAGMA table_info(table_name). This command will list each column of table_name as a row in the result. Based on this result, you can determine whether the column exists or not.
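
A sketch of that check-then-ALTER pattern, assuming Python's sqlite3 and placeholder table/column names (the identifiers are interpolated directly, so they must be trusted values):

import sqlite3

def add_column_if_missing(conn, table, column, decl):
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "person", "email", "TEXT")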

ALTER TABLE ADD COLUMN IF NOT EXISTS in SQLite - Stack Overflow

sqlite alter-table

PRAGMA foreign_keys = on;

ATTACH DATABASE 'db1.sqlite' AS db1;

ATTACH DATABASE 'db2.sqlite' AS db2;

BEGIN;

CREATE TABLE Fruit      (
                          id            INTEGER PRIMARY KEY NOT NULL,
                          name          TEXT    UNIQUE ON CONFLICT IGNORE
                          )
                          ;

CREATE TABLE Juice      (
                          id            INTEGER PRIMARY KEY NOT NULL,
                          name          TEXT    UNIQUE ON CONFLICT IGNORE
                        )
                        ;

CREATE TABLE Recipe     (
                          id            INTEGER PRIMARY KEY NOT NULL,
                          juice_id      INTEGER NOT NULL,
                          fruit_id      INTEGER NOT NULL,
                          FOREIGN KEY   ( juice_id ) REFERENCES Juice ( id )
                                        ON UPDATE CASCADE
                                        ON DELETE CASCADE,
                          FOREIGN KEY   ( fruit_id ) REFERENCES Fruit ( id )
                                        ON UPDATE CASCADE
                                        ON DELETE CASCADE
                        )
                        ;


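-- Copy everything from db1 verbatim, preserving its original ids.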
INSERT INTO Fruit  ( id, name )               SELECT id, name FROM db1.Fruit;
INSERT INTO Juice  ( id, name )               SELECT id, name FROM db1.Juice;
INSERT INTO Recipe ( id, juice_id, fruit_id ) SELECT id, juice_id, fruit_id FROM db1.Recipe;

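-- Merge db2 by name; UNIQUE ON CONFLICT IGNORE silently skips names that already exist.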
INSERT INTO Fruit ( name ) SELECT name FROM db2.Fruit;
INSERT INTO Juice ( name ) SELECT name FROM db2.Juice;

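-- Re-link db2's recipes by fruit/juice name so they pick up the ids assigned in the merged tables.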
CREATE TEMPORARY TABLE Recipe_tmp AS
                                    SELECT Juice.name AS j_name, Fruit.name AS f_name
                                      FROM db2.Recipe, db2.Fruit, db2.Juice
                                        WHERE db2.Recipe.juice_id = db2.Juice.id AND db2.Recipe.fruit_id = db2.Fruit.id
;

INSERT INTO Recipe ( juice_id, fruit_id ) SELECT j.id, f.id
                                            FROM Recipe_tmp AS r, Juice AS j, Fruit AS f
                                              WHERE r.j_name = j.name AND r.f_name = f.name
;


DROP TABLE Recipe_tmp;

COMMIT;

DETACH DATABASE db1;
DETACH DATABASE db2;

WOW. I will happily accept this as the answer, provided it works, as it is the closest to a complete solution (though admittedly a bit crude). Sure, it is raw SQL, but it might turn out to be faster than programmed solutions, and some might even like it better this way. I wonder about two things: problems with FKs when there already are non-unique values... And since I also have indexes on foreign keys, I presume I'd need to recreate them, for the sake of completeness of the above answer? Hmm, and one more, last thing: the INSERT INTO Recipe from db1. Why do you also take the id, isn't it better to allow SQLite to renumber?

So it did mostly work; by that I mean that if anything did not work it was my fault, as I also needed to tweak it to my schema. What other stumbling blocks did I hit, which may be a lesson to anyone else? If you don't pass any argument to sqlite3, it operates in memory, which may or may not be faster, but to "save" to a file you need to do .backup filename at the end of the script. Also, you can pass this script in as a whole, or execute ".read scriptname". Anyway, I've learned a lot. Thank you @josepn.

@user3264463 - It is an exact copy of the first database. If that is not so important, it is better to let SQLite assign the growing id itself.

sql - Sqlite merging databases into one, with unique values, preservin...

sql database sqlite merge

SQLite allocates and keeps a bunch of memory, which is only freed when the database is closed. You can also adjust how much memory it will allocate by issuing a 'pragma cache_size = nnn' command.
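
A small sketch, assuming Python's sqlite3: a positive cache_size is a page count, while a negative value is interpreted as a size in KiB, so the line below caps the page cache at roughly 2 MiB for this connection:

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("PRAGMA cache_size = -2000")  # about 2 MiB; a positive number would mean pages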

objective c - iOS - FMDB usage and memory - Stack Overflow

objective-c ios sqlite fmdb

I don't know how large your data sets are, but if they are not too big, then WAL mode might help. You may experiment with "PRAGMA synchronous=1": this setup does not sync after each transaction, but only once in a while (by default, when 2 MB of new data has been written). The SQLite docs say that you might lose a few recent transactions, but the DB won't be corrupted.
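
A sketch of that combination, assuming Python's sqlite3 (the numeric value 1 is the same as NORMAL):

import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("PRAGMA journal_mode = WAL")    # returns the new mode, e.g. 'wal'
conn.execute("PRAGMA synchronous = NORMAL")  # equivalent to synchronous=1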

Options for high performance SQLite - Stack Overflow

performance sqlite embedded

I'm not an expert on SQLite (in fact I have no practical experience with it), but I would imagine that whether or not your IDENTITY fields automatically increment on insert has more to do with the definition of the table than with the connection string you are using.

Be reminded that you cannot increment a FOREIGN KEY per se, but rather you need to increment the table with the IDENTITY field that serves as the PRIMARY KEY that the FOREIGN KEY then references.

Can you add the definition of the table that you are having trouble with?
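
To illustrate the point that the table definition, not the connection string, drives the auto-increment, here is a hedged sketch using Python's sqlite3 with made-up table names; it does not reproduce the .NET provider's behaviour:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY AUTOINCREMENT,"
             " parent_id INTEGER NOT NULL REFERENCES parent(id))")
cur = conn.execute("INSERT INTO parent (name) VALUES ('a')")  # id is assigned automatically
conn.execute("INSERT INTO child (parent_id) VALUES (?)", (cur.lastrowid,))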

This is just what I do. My table has an Id as PK with autoincrement enabled. But for another table it works, hm... I use 3.7.7.1. Hm... I am using TransactionScope instead of SQLiteTransaction, and the Transaction and DbTransaction properties of the SQLiteCommand object are NULL. Maybe there is a problem :/ gotta check that first using SQLiteTransaction.

If it works for another table, then it should have nothing to do with your connection string (but it shouldn't have anything to do with the connection string anyway!). I would recommend that you post the SQL you used to create the working and non-working tables, as well as the code that actually performs the INSERTs that aren't incrementing your identity field.

Stop! ;-) For the other table it did not work either when I did multiple inserts; I had just faked another insert. Either the connection attribute is not working or it's due to the TransactionScope. I'll check that out tomorrow, see ya!

OK, as I said, I did more tests. I found out the Foreign Keys connection attribute works ;-) and by doing that I found the failure 20 inches in front of the screen :p I had passed commandInsert instead of commandSelectLastAutoIncId somewhere... therefore I got bad results. Now everything works fine :)

Foreign Keys auto increment does not work in sqlite .NET provider - St...

.net sqlite foreign-keys connection-string pragma

We don't have an explicit API for event "removal" at this time, though this is a feature that will eventually be available. So you'd need to create a single event handler that turns itself on and off based on a flag; it seems like that's what you've already worked out.

And just to clarify: Since this event happens on a new connection, what is the quickest way to force a new connection? session.commit()? I would like to do this from as high up as possible.

If you have NullPool in use, then sure.
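
For reference, the connect-event pattern from the SQLAlchemy documentation for enabling foreign keys on every new SQLite connection; the on/off flag is the workaround discussed above, and the names here are illustrative:

from sqlalchemy import create_engine, event

engine = create_engine("sqlite:///app.db")
fk_pragma_enabled = True  # flip this flag to "turn the event off"

@event.listens_for(engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    if not fk_pragma_enabled:
        return
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()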

SQLite PRAGMA foreign_keys with SQLAlchemy - Stack Overflow

sqlite sqlalchemy

Your solution may lie in SQLite itself. The schema is by default not writable (you cannot UPDATE the sqlite_master table), but with sane knowledge and a little help from PRAGMA writable_schema=ON; you can do this. Some changes are safe, for example changing VARCHAR(N) to VARCHAR(M); SQLite doesn't care about the number in the brackets.

CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Txt] VARCHAR(30)
)

The line below allows sqlite_master changes:

PRAGMA writable_schema=ON;

And the following statement will change the limit to 100:

Update sqlite_master set sql='CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Txt] VARCHAR(100)
)' where tbl_name='TestTable' and type='table'

But you should be aware of what you're doing, since some changes are not welcome: SQLite expects a certain storage format based on the information in the schema. A VARCHAR-to-VARCHAR change does not alter the storage format.

This does not make any sense with sqlite. Unlike most SQL databases, SQLite does not restrict the type of data that may be inserted into a column based on the columns declared type. Instead, SQLite uses dynamic typing. The declared type of a column is used to determine the affinity of the column only. Such tricks to fake the table definition won't work with sql requests, for instance.

@Arnaud_Bouchez, OP quote: "All characters stored beyond this 30 character limit are inaccessible to me". This trick solves exactly that problem, since the code that parses the SQLite schema (the only way to do size-limited typing) will now be more than happy to show all the characters up to 100.

Question title is explicitly about "Delphi support for SQLite manifest typing", and the truncation is just a small part of the problem. TDataSet based SQLite3 units (like UniDac or even the latest DBExpress driver supplied with XE3) will simply be unable to use this feature. OP question is about using this feature in Delphi, not only fix the text field truncation. The problem is not in SQLite3, but on the Delphi side, how the SQlite3 APIs are called and returned data is processed. That's why your answer is true, but sadly out of scope.

Delphi support for SQLite manifest typing - Stack Overflow

delphi sqlite data-type-conversion devart

Your code seems to be assuming that the database already exists. Are you trying to open an existing, non-encrypted database and then encrypt it using SQLCipher? If so, what you are doing will not work.

The sqlite3_key function does not encrypt an existing database. If you want to encrypt an existing database you'd either need to ATTACH a new encrypted database and move data between the two, as described here:

Or, with SQLCipher 2, you can use sqlcipher_export, which provides an easy way to move data between databases:
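
A hedged sketch of that export flow; it assumes conn is a connection to the existing plaintext database opened through a SQLCipher-enabled build (for example via a SQLCipher Python binding), and the file name and passphrase are placeholders:

def encrypt_copy(conn):
    # Attach a new, keyed database and copy everything into it.
    conn.execute("ATTACH DATABASE 'encrypted.db' AS encrypted KEY 'new-passphrase'")
    conn.execute("SELECT sqlcipher_export('encrypted')")
    conn.execute("DETACH DATABASE encrypted")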

ios - Iphone/ipad : I got "a file is encrypted or is not a database" e...

iphone ios ipad sqlite3 sqlcipher

No, if configured right, SQLite preserves data integrity in this situation. "NO ACTION" is used by default, and this prohibits deletion or update of a master key while there is still a referring key in a referencing table (tested with 3.7.x). My mistake was that I was not aware that PRAGMA foreign_keys = ON; must be set for every new connection to the database.
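
A small sketch of the per-connection point, assuming Python's sqlite3 and throwaway table names: without the pragma the DELETE below would silently succeed, while with it the default NO ACTION rule rejects it:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # must be repeated on every new connection
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY, master_id INTEGER REFERENCES master(id))")
conn.execute("INSERT INTO master (id) VALUES (1)")
conn.execute("INSERT INTO detail (id, master_id) VALUES (1, 1)")
try:
    conn.execute("DELETE FROM master WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("blocked:", exc)  # FOREIGN KEY constraint failed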

sqlite3 - Does SQLite really not preserve data integrity of foreign ke...

sqlite sqlite3

Interesting post script. The Vacuum statement in SQLite copies the entire database to a temp file for rebuilding. If you plan on doing this "On Demand" via user or some process, it can take a considerable amount of disk space and time to complete once your database gets above 100MB, especially if you are looking at several GB. In that case, you are better off using the AUTO_VACUUM=true pragma statement when you create the database, and just deleting records instead of running the VACUUM. So far, this is the only advantage I can find that SQL Server Compact has over SQLite. On demand SHRINK of the Sql Server Compact database is extremely fast compared to SQLite's vacuum.

Have not had the time to try it yet, but this pragma looks nice. However, since this is not the default, one should look at the exact behaviour here: sqlite.org/pragma.html#pragma_auto_vacuum

Also, the arguments may not be simply "true" or "false", but rather "0,1,2,NONE,FULL,INCREMENTAL".
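
A sketch of setting it at creation time, assuming Python's sqlite3 and a placeholder file name; auto_vacuum only takes full effect on a new database or after a VACUUM:

import sqlite3

conn = sqlite3.connect("fresh.db", isolation_level=None)  # autocommit, so VACUUM can run
conn.execute("PRAGMA auto_vacuum = FULL")   # or INCREMENTAL
conn.execute("VACUUM")                      # rebuild so the setting actually applies
conn.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, msg TEXT)")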

c# - What is the Method for Database CleanUp in SQlite? - Stack Overfl...

c# sqlite sqlite3 ado.net

I partially solved my problem. For anyone interested in an answer to this, this link will be helpful. In short, turning off journal mode in SQLite ("pragma journal_mode=OFF") improves the insert performance significantly (almost four times the previous speed in my case), at the cost of making the code prone to data loss in the case of an unexpected shutdown.

As for the normal insert speed, it is way faster than 2 ms per operation. It can reach as high as hundreds of thousands of insert operations per second using the right pragma instructions, making the best use of transactions, etc.
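
A rough sketch of that combination (the risky journal pragma plus one big transaction), assuming Python's sqlite3 and a throwaway table; as noted above, journal_mode=OFF trades crash safety for speed:

import sqlite3
import time

conn = sqlite3.connect("bench.db")
conn.execute("PRAGMA journal_mode = OFF")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

rows = [(None, f"row {i}") for i in range(100_000)]
start = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
print(f"{len(rows) / (time.perf_counter() - start):,.0f} inserts/s")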

performance - What is the normal insert time in a medium size database...

database performance sqlite