Database Benchmark

Synonym: Database Contest.


See also: "Performance-Vergleich von PostgreSQL, SQLite, db4o und MongoDB" ("Performance Comparison of PostgreSQL, SQLite, db4o and MongoDB"), [http://wiki.hsr.ch/Datenbanken/wiki.cgi?SeminarDatenbanksystemeHS1112 Seminar Database Systems Autumn 2011/2012], Master of Science in Engineering, HSR.
 
== Spatial Database Benchmarks ==
* [[HSR Texas Geo Database Benchmark]] - Comparing PostGIS,
* Spatial Overlay/Clipping (see the query sketch after this list):
** "ArcGIS vs QGIS etc Clipping Contest Rematch revisited": [http://courses.neteler.org/arcgis-vs-qgis-etc-clipping-contest-rematch-revisited/]
** Clipping Contest with SpatiaLite: [https://www.gaia-gis.it/fossil/libspatialite/wiki?name=benchmark-4.0]
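
The clipping contests above all time essentially the same overlay operation. Below is a minimal sketch of such a measurement against PostGIS; the table <code>parcels</code>, its column <code>geom</code>, the SRID and the connection string are illustrative assumptions, not taken from the contests linked above.

<syntaxhighlight lang="python">
import time

import psycopg2  # assumed driver; any DB-API module would look the same

# Placeholder connection string, table and column names - adapt to your data.
conn = psycopg2.connect("dbname=gisdb user=gis")
cur = conn.cursor()

CLIP_WKT = "POLYGON((0 0, 0 1000, 1000 1000, 1000 0, 0 0))"  # hypothetical clip area

t0 = time.perf_counter()
cur.execute(
    """
    SELECT ST_Intersection(geom, ST_GeomFromText(%s, 4326))
    FROM parcels
    WHERE geom && ST_GeomFromText(%s, 4326)  -- bbox prefilter, uses the spatial index
    """,
    (CLIP_WKT, CLIP_WKT),
)
rows = cur.fetchall()
print(f"clipped {len(rows)} geometries in {time.perf_counter() - t0:.3f} s")
cur.close()
conn.close()
</syntaxhighlight>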
 
See also:
* Comparing Lucene/Solr with PostGIS [http://wiki.hsr.ch/Datenbanken/SeminarDBS1ThemaWolski Seminar Database Systems Autumn 2013/2014], Master of Science in Engineering, HSR.
 
== About Database Performance Benchmarking ==


Existing DB-Benchmarks:
* TPC-C for OLTP benchmarks (a toy transaction sketch follows this list).
* TPC-R & TPC-H (successors of TPC-D) for data warehouse & decision support systems.
* TPC-W for Web-based systems.
* "The Engineering Database Benchmark".
* Open Source Database Benchmark.
* PolePosition open source database benchmark [3].
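
TPC-C reports throughput (transactions per second) over a fixed mix of business transactions. The following is a deliberately tiny illustration of that metric, not the official benchmark: the real TPC-C specification prescribes nine tables, five transaction types and strict mixing rules. All names here are made up.

<syntaxhighlight lang="python">
import random
import sqlite3
import time

# Toy stand-in for a TPC-C-like "New-Order" transaction.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE stock  (item_id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT,
                         item_id INTEGER, amount INTEGER);
""")
db.executemany("INSERT INTO stock VALUES (?, 100)", [(i,) for i in range(1000)])
db.commit()

def new_order():
    # One business transaction: decrement stock, record the order, commit.
    item, amount = random.randrange(1000), random.randint(1, 5)
    db.execute("UPDATE stock SET qty = qty - ? WHERE item_id = ?", (amount, item))
    db.execute("INSERT INTO orders (item_id, amount) VALUES (?, ?)", (item, amount))
    db.commit()

N = 10_000
t0 = time.perf_counter()
for _ in range(N):
    new_order()
print(f"{N / (time.perf_counter() - t0):.0f} transactions/s")  # the headline OLTP metric
</syntaxhighlight>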

== Guidelines ==

A database performance benchmark has to consider the following aspects (a measurement sketch follows this list):

# Cold and warm start (beware that with a warm start, caching will take place!).
# Equality and range queries.
# Query result sets that return a single tuple versus those that return more than half of the tuples in the dataset.
# Single-user versus multi-user operation.
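
A minimal sketch of how aspects 1-3 translate into a measurement loop, using SQLite from the Python standard library so it stays self-contained; the table, row count and query shapes are illustrative assumptions. Aspect 4 (multi-user) would additionally need concurrent connections or threads and is omitted here.

<syntaxhighlight lang="python">
import os
import sqlite3
import time

DB_PATH = "bench.db"  # on-disk file, so a fresh connection approximates a cold start

def setup():
    # Illustrative table: 100k rows with skewed integer values.
    if os.path.exists(DB_PATH):
        os.remove(DB_PATH)
    db = sqlite3.connect(DB_PATH)
    db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
    db.executemany("INSERT INTO t VALUES (?, ?)", ((i, i % 97) for i in range(100_000)))
    db.commit()
    db.close()

def timed(db, sql, args=()):
    t0 = time.perf_counter()
    rows = db.execute(sql, args).fetchall()
    return time.perf_counter() - t0, len(rows)

setup()
db = sqlite3.connect(DB_PATH)

# Aspect 1 - cold vs. warm start: the first run pays I/O against an empty page
# cache, the repeat hits the cache (the OS file cache may still warm the "cold" run).
for label in ("cold", "warm"):
    dt, n = timed(db, "SELECT * FROM t WHERE val = ?", (42,))
    print(f"{label} start: {n} rows in {dt * 1000:.2f} ms")

# Aspects 2/3 - an equality query returning one tuple vs. a range query
# returning more than half of all tuples in the dataset.
dt, n = timed(db, "SELECT * FROM t WHERE id = ?", (123,))
print(f"equality: {n} row(s) in {dt * 1000:.2f} ms")
dt, n = timed(db, "SELECT * FROM t WHERE id > ?", (40_000,))
print(f"range:    {n} rows in {dt * 1000:.2f} ms")
</syntaxhighlight>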

Software (scripts) for benchmark automation, for example:
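
Such a script mainly automates repetition and aggregation, so that a single outlier does not skew the result. A small sketch that times an arbitrary shell command; the <code>psql</code> invocation at the bottom is a hypothetical example, with database and table names as placeholders.

<syntaxhighlight lang="python">
import statistics
import subprocess
import time

def bench(cmd, repeats=10):
    """Run a shell command repeatedly and aggregate wall-clock timings."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return {
        "median_s": statistics.median(samples),
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples),
    }

# Hypothetical example - database, table and query are placeholders:
print(bench('psql -d gisdb -c "SELECT count(*) FROM parcels"'))
</syntaxhighlight>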

== Weblinks / References ==