(DB/SQL) Performance-oriented way to manage map coordinates data
I have a function that lets users either create a new spot/marker on a map using the usual latitude & longitude value pair, or modify existing spots. These spots need to be saved to a table.
Scenario: the table holds 6 sets of coordinates, which are retrieved and displayed on the map. Let's say 2 of the existing spots are modified, 3 of them are removed, and finally, 4 new spots are added to the map.
Now, being a novice SQL user, I can think of 2 approaches for writing the resulting coordinates to the database, as follows:
- Remove the existing data in the table first, grab whatever is left on the map, iterate through it, and create a record for each set of coordinates.
- Update the data for the modified spots, delete the ones removed by the user, and create new records for the new spots.
For this simplistic scenario, I'd think option #1 requires 1 DELETE query and 7 INSERT queries (one for each spot still on the map), which results in a total of 8 queries to be executed. Option #2, on the other hand, requires 3 DELETE queries, 2 UPDATE queries, and 4 INSERT queries, which comes to a total of 9 queries.
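For concreteness, here is a rough sketch of what the two approaches could look like in SQL. The `spots` table and its `id`, `lat`, `lng` columns are just placeholders I'm assuming for illustration:

```sql
-- Option #1: wipe the table, then re-insert every spot still on the map
DELETE FROM spots;
INSERT INTO spots (id, lat, lng) VALUES (1, 40.7128, -74.0060);
INSERT INTO spots (id, lat, lng) VALUES (2, 34.0522, -118.2437);
-- ... one INSERT per remaining spot (7 in the example above) ...

-- Option #2: touch only the rows that actually changed
UPDATE spots SET lat = 41.0000, lng = -73.9000 WHERE id = 1;   -- one per modified spot
DELETE FROM spots WHERE id IN (3, 4, 5);                       -- the removed spots
INSERT INTO spots (id, lat, lng) VALUES (7, 51.5074, -0.1278); -- one per new spot
```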
The whole point of posting this question is that I'm not sure what kind of performance advantage or disadvantage I should expect from either approach when the dataset becomes considerably large. Is either one of the 2 approaches inherently better than the other for reasons other than the total number of required queries? What's the conventional way of dealing with a scenario similar to this?
Option #1 is not going to be feasible at scale. What if a client modifies 3 points and the table contains 3 trillion points? Do you truncate the entire table and insert 2.9 trillion points back in? If you had a client B who wants to view the map during that time, they'll have to wait while the table comes back up. Besides that, there is a higher risk whenever you talk about wiping a table.
Option #2 is more traditional, safer, and easier to scale.
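If the per-statement overhead of option #2 is a concern, the individual statements can usually be wrapped in a single transaction, and the new spots collapsed into one multi-row INSERT. This is only a sketch against the same hypothetical `spots` table as above, and transaction syntax varies slightly between DBMSs:

```sql
BEGIN;

-- the two modified spots
UPDATE spots SET lat = 41.0000, lng = -73.9000 WHERE id = 1;
UPDATE spots SET lat = 34.1000, lng = -118.3000 WHERE id = 2;

-- the three removed spots
DELETE FROM spots WHERE id IN (3, 4, 5);

-- the four new spots, in a single multi-row INSERT
INSERT INTO spots (id, lat, lng) VALUES
    (7,  51.5074,  -0.1278),
    (8,  48.8566,   2.3522),
    (9,  35.6762, 139.6503),
    (10, 55.7558,  37.6173);

COMMIT;
```

Applying one round of edits atomically also means other clients never see a half-applied set of changes, which speaks directly to the "client B viewing the map" problem.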
You can test both of these against some test data, and you'll be able to observe the difference.
Note: it's not the number of queries that determines performance. It's what the queries are attempting to do, what the schema is designed for, and which DBMS you're using. Take a look at R-trees; they're important for efficient spatial queries. http://en.wikipedia.org/wiki/r-tree
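As an illustration of the spatial-index point, here is one way it might look. I'm assuming MySQL (5.7+ with InnoDB), whose SPATIAL indexes are R-tree based; other systems differ (e.g. PostGIS uses a GiST index), and the table layout is again just a placeholder:

```sql
-- Store each spot as a POINT and index it with a SPATIAL (R-tree based) index
CREATE TABLE spots (
    id       INT UNSIGNED NOT NULL PRIMARY KEY,
    location POINT NOT NULL,
    SPATIAL INDEX idx_spots_location (location)
) ENGINE = InnoDB;

-- Fetch only the spots inside the currently visible bounding box,
-- instead of scanning every row in the table
SELECT id
FROM spots
WHERE MBRContains(
    ST_GeomFromText('POLYGON((40 -75, 41 -75, 41 -73, 40 -73, 40 -75))'),
    location
);
```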