We encountered a query performance issue: a query against a table with about 100,000 records took more than 10 minutes to complete. We are using the Spring Data JPA framework and did not expect it to be this slow, since 100,000 records is not that many. After a Google search, we found some similar issues. From the Oracle JDBC Developer's Guide (emphasis mine):

By default, when Oracle JDBC runs a query, it retrieves a result set of 10 rows at a time from the database cursor. This is the default Oracle row fetch size value. You can change the number of rows retrieved with each trip to the database cursor by changing the row fetch size value.
Standard JDBC also enables you to specify the number of rows fetched with each database round-trip for a query, and this number is referred to as the fetch size. In Oracle JDBC, the row-prefetch value is used as the default fetch size in a statement object. Setting the fetch size overrides the row-prefetch setting and affects subsequent queries run through that statement object.
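To see why the default matters: with a fetch size of 10, reading 100,000 rows costs roughly 10,000 round trips to the database. A minimal sketch of what that looks like in plain JDBC (the connection and the table name BIG_TABLE are assumptions for illustration, not code from our application):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    static void scanWithDefaults(Connection connection) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement("SELECT * FROM BIG_TABLE")) {
            // With the Oracle JDBC driver this defaults to 10, so fetching
            // 100,000 rows means roughly 10,000 trips to the database cursor.
            System.out.println("fetch size = " + ps.getFetchSize());
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // each batch of rows (10 by default) is another round trip
                }
            }
        }
    }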
Solutions

- If you are using a PreparedStatement, call PreparedStatement::setFetchSize before calling executeQuery (see the sketch after this list).
- If you are using a Spring Data JPA @Query, add @QueryHints to set the fetch size: @QueryHints(@QueryHint(name = "org.hibernate.fetchSize", value = "100000"))
- Or via properties in application.yml (this maps to Hibernate's hibernate.jdbc.fetch_size setting):

    spring:
      jpa:
        properties:
          hibernate:
            jdbc:
              fetch_size: 100000
Note that this last setting applies to all queries in the application, which may cause memory consumption issues.
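A minimal sketch of the first two options (the connection, entity, repository, and query below are assumptions for illustration, not our actual code; depending on your Spring Boot version, QueryHint comes from javax.persistence or jakarta.persistence):

    // Plain JDBC: set the fetch size on the statement before executing the query.
    try (PreparedStatement ps = connection.prepareStatement("SELECT * FROM BIG_TABLE")) {
        ps.setFetchSize(100_000);   // overrides Oracle's default row-prefetch of 10
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process rows
            }
        }
    }

    // Spring Data JPA: attach the Hibernate fetch-size hint to one specific query.
    import java.util.List;
    import jakarta.persistence.QueryHint;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.jpa.repository.Query;
    import org.springframework.data.jpa.repository.QueryHints;

    public interface BigTableRepository extends JpaRepository<BigTableEntity, Long> {

        @QueryHints(@QueryHint(name = "org.hibernate.fetchSize", value = "100000"))
        @Query("select e from BigTableEntity e")
        List<BigTableEntity> findAllRows();
    }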
We used @QueryHints (which has Hibernate call PreparedStatement::setFetchSize) for the specific query that returns more than 100,000 rows. We are now able to complete the query in about 1 second, a roughly 660x speedup compared to running it without the fetch size hint.