1. To see all the queries currently running:
vsql=> select user_name, session_id, transaction_id, statement_id, request, request_duration_ms, start_timestamp from query_requests where is_executing='t' order by start_timestamp desc;
"query_requests" is a system view built over few system tables, but this is only available in Vertica 6.0 and above. To get this output in older versions:
vsql=> select user_name, session_id, transaction_id, statement_id, query, query_duration_us, query_start from query_profiles where is_executing='t';
2. To get the run time for a particular query:
vsql=> select user_name, session_id, transaction_id, statement_id, request, request_duration_ms, start_timestamp from query_requests where transaction_id=<transaction id> and statement_id=<statement id>;
Or we can filter on particular request(s):
vsql=> select user_name, session_id, transaction_id, statement_id, request, request_duration_ms, start_timestamp from query_requests where request ilike '%my_table%' order by start_timestamp desc limit 10;
Corresponding queries for version 5.x:
vsql=> select ri.time as Start_Time, rc.time as End_Time,rc.time - ri.time as Query_Duration, ri.request as Query from dc_requests_issued ri, dc_requests_completed rc where ri.session_id = rc.session_id and ri.request_id = rc.request_id and ri.transaction_id=< transaction id > and ri.statement_id=< statement id >;
vsql=> select ri.time as Start_Time, rc.time as End_Time,rc.time - ri.time as Query_Duration, ri.request as Query from dc_requests_issued ri, dc_requests_completed rc where ri.session_id = rc.session_id and ri.request_id = rc.request_id and ri.request like '%my_table%' order by ri.time desc limit 10;
3. To see the resources acquired by running queries:
vsql=> select transaction_id, statement_id, thread_count, open_file_handle_count, memory_inuse_kb/1024 memory_inuse_mb from resource_acquisitions where is_executing='t' limit 10;
4. To see the run time of various execution engine operators:
vsql=> select operator_name, avg(counter_value) as Avg_Time_Per_Counter from dc_execution_engine_profiles where transaction_id=<transaction id> and statement_id=<statement id> and counter_name='execution time (us)' group by operator_name order by avg(counter_value) desc;
Once we have the run time values for all the operators, we can drill down into a particular operator to see its individual counter values.
vsql=> select counter_name, avg(counter_value) as Avg_Counter_Value from dc_execution_engine_profiles where transaction_id=<transaction id> and statement_id=<statement id> and operator_name='Join' group by counter_name order by avg(counter_value) desc;
5. To see deleted row counts by schema and by projection:
vsql=> select schema_name, sum(total_row_count) AS Total_Row_Count, sum(deleted_row_count) AS Deleted_Row_Count FROM storage_containers group by schema_name order by sum(deleted_row_count) desc;
vsql=> select schema_name, projection_name, storage_type, sum(deleted_row_count) as Deleted_Row_Count from delete_vectors group by schema_name, projection_name, storage_type order by sum(deleted_row_count) desc;
6. To see all the locks in the database and the sessions holding them:
vsql=> select object_name as Object, lock_mode as Lock_Mode, lock_scope as Lock_Scope, session_id, substring(current_statement, 1, 30) || '...' as Statement from v_monitor.sessions s join v_monitor.locks l on to_hex(s.transaction_id) = substring(l.transaction_description, 6, 14);
7. To find the ROS container count for a projection (a database-wide variation follows below):
vsql=> select projection_name, node_name, count(distinct storage_oid) as ROS_Container_Count from storage_containers where projection_name=<projection name> group by 1,2 order by 1,2;
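A handy variation of the same query lists the projections with the highest ROS container counts across the whole database; this is only a sketch over the same storage_containers columns used above:
vsql=> select projection_name, node_name, count(distinct storage_oid) as ROS_Container_Count from storage_containers group by 1,2 order by 3 desc limit 10;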
Saturday 9 February 2013
Tuple Mover Mergeout, Replay Delete & Locks
Tuple Mover operation Mergeout is the process of consolidating ROS containers and purging deleted records. ROS containers are files that hold a subset of Vertica's Read Optimized Store (ROS); they are created by bulk loads and DML operations on the data stored within a projection. The Tuple Mover periodically merges ROS containers to maximize performance.
Mergeout works in two phases. The first phase takes a U (Usage) lock, which doesn't conflict with any other lock type. The second phase takes a T (Tuple Mover) lock, which conflicts with transactions that require an X (Exclusive, needed for delete operations) or O (Owner, needed for Drop/Truncate) lock.
X & O locks are not granted if the object already holds a T lock (in fact, an O lock is not compatible with any other lock, and an X lock is compatible only with a U lock).
In such a scenario, X & O lock requests fail with an error like the one below:
ERROR: Locking failure: Timed out X locking Table:<<TABLE NAME>>.
T held by [user <user name> (Mergeout: (Table: <<SCHEMA NAME.TABLE NAME>>)
(Projection: <<PROJECTION NAME>>))]. Your current transaction isolation level is SERIALIZABLE
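To check whether a Tuple Mover T lock is currently being held, the LOCKS system table can be filtered by lock mode. A minimal sketch, reusing the columns from the locks query in the monitoring post above:
vsql=> select object_name, lock_mode, lock_scope, transaction_description from locks where lock_mode='T';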
One common reason for an extended T lock duration is a Mergeout that is performing a Replay Delete operation. Replay delete is the process of reconciling deleted records to determine whether or not they should be included in the new ROS containers that result from merging. This reconciliation is driven by the configuration parameter "HistoryRetentionTime", which determines how long deleted data is kept (in seconds) as a historical reference. The Tuple Mover purges records deleted before the HistoryRetentionTime window and reorganizes records deleted within it into the new ROS containers. Replay delete is a rather slow operation in Vertica, since Vertica is primarily optimized for query-intensive workloads.
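Before deciding whether to wait or to cancel, it can help to check the current retention window. A sketch; treat the CONFIGURATION_PARAMETERS table and its column names as assumptions for your version:
vsql=> select parameter_name, current_value from configuration_parameters where parameter_name='HistoryRetentionTime';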
If a TM Mergeout is stuck in replay delete, you can either wait for it to complete or stop it to allow other operations to proceed. To cancel a mergeout, take the following steps:
1. Query table TUPLE_MOVER_OPERATIONS to get the associated SESSION_ID:
vsql=> select SESSION_ID from TUPLE_MOVER_OPERATIONS where IS_EXECUTING = True and PLAN_TYPE = 'Replay Delete';
2. Once we have the session id, we can use the CLOSE_SESSION function to close the session and thereby cancel the mergeout, e.g.
vsql=> select CLOSE_SESSION('myhost-12990:0x51c31');
(PS: A long-running replay delete can be avoided by advancing the AHM (with the function MAKE_AHM_NOW()) after any bulk delete; this makes sure that the Tuple Mover purges the deleted records rather than having to reorganize them. See the one-liner below.)
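For completeness, advancing the AHM is a one-liner; note that after this, historical queries can no longer see data from before the new AHM epoch:
vsql=> select make_ahm_now();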
Monday 28 January 2013
Want to store frequently accessed data on faster storage (SSD)? Storage Policies can help.
Vertica "Storage Locations", as Vertica users know are storage paths on disk that you specify to store data and temp files. Each Vertica cluster requires a location to store data, and another location to store catalog. These default storage locations (that we specify during Vertica install & database creation) must exist on all the nodes.
e.g.
/home/dbadmin/testdb/v_testdb_node0001_data
/home/dbadmin/testdb/v_testdb_node0001_catalog
Whenever we add data to the database or perform any DML operation, the data is stored in these storage locations on disk. Vertica also supports adding new storage locations on each node if we want to increase our storage capacity. The new storage location can be on the Vertica nodes themselves or on a local SAN.
So far so good, but until Vertica 6.0 we didn't have the option to selectively store a particular database object (database, schema, table) on a particular storage location. All the data added or modified was stored in the default storage location. Having this freedom is quite desirable when we have some business-critical (frequently accessed) tables that we want to store on a faster (and, not to mention, costlier) SSD disk.
The Vertica 6.1 release (code-named Bulldozer) lets us do this with two newly introduced storage concepts, viz. Location Labels & Storage Policies.
Location Label: This lets us create labelled storage locations. A new optional parameter has been added to the ADD_LOCATION function to create a storage location with a descriptive label. These labelled storage locations are used to define storage policies.
vsql=> select add_location ('/home/dbadmin/SSD/schemas','v_testdb_node0001', 'data', 'SSD');
add_location
-------------------------------------
/home/dbadmin/SSD/schemas added.
(1 row)
The example above creates a new data storage location with the label "SSD". We will use this label to identify the storage location when creating a storage policy.
Once we have added our faster storage location (identified by its label), it's time to create a Storage Policy to associate database objects with it. The newly introduced Vertica function that lets us do this is called "SET_OBJECT_STORAGE_POLICY". Once a storage policy exists, Vertica uses the labelled location as the default storage location for the object's data. Storage policies let you determine where to store your critical data, as in the example discussed above. Only one storage policy can exist per database object.
vsql=> select set_object_storage_policy ('SALES', 'SSD');
set_object_storage_policy
-----------------------------------
Default storage policy set.
(1 row)
Every time data is loaded or updated, Vertica checks whether the object has an associated storage policy. If it does, Vertica automatically uses the labelled storage location. If no storage policy exists for an object or its parent entities, Vertica uses its default storage algorithms on the available storage locations.
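If I recall correctly, 6.1 also exposes a STORAGE_POLICIES system table that makes it easy to verify the association; treat the table name as an assumption for your build:
vsql=> select * from storage_policies;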
A storage policy can be cleared using the function "CLEAR_OBJECT_STORAGE_POLICY", passing the name of the object associated with the storage location.
vsql=> select clear_object_storage_policy('SALES');
Tuesday 25 December 2012
COPY: How to skip a first column that has a different delimiter
I ran into this very interesting problem some time back: a customer had a data file in which the first column's delimiter differed from that of the rest of the columns. The customer wanted to load this file using the COPY command and skip the first column.
The data file looked like this:
The first column of the data file has ~ as its field delimiter.
The rest of the columns have a pipe (|) as their field delimiter.
$ cat load.out
SkipMe1 ~ A1 | B1 | C1
SkipMe2 ~ A2 | B2 | C2
The table definition is below:
CREATE TABLE t (
AColumn VARCHAR(10),
BColumn VARCHAR(10),
CColumn VARCHAR(10)
);
CREATE PROJECTION tp (
AColumn,
BColumn,
CColumn
)
AS SELECT * from t;
The data, once loaded, was expected to look as shown below:
 AColumn | BColumn | CColumn
---------+---------+---------
 A1      | B1      | C1
 A2      | B2      | C2
Skipping a column in a COPY command is easy using "FILLER", and so is specifying a column delimiter. But I had never tried whether we can specify a delimiter for one specific column, and that too in conjunction with the FILLER option. So I tested the solution out and voila...
The COPY command I used is shown below. Note the use of "FILLER" and the use of "DELIMITER" only for the first column.
$ cat load.out | vsql -c "copy t(c1 FILLER varchar(10) delimiter '~', c2 FILLER varchar(10), c3 FILLER varchar(10), c4 FILLER varchar(10), AColumn as c2, BColumn as c3, CColumn as c4) from stdin direct";
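A quick sanity check after the load should return the rows exactly as expected above:
vsql=> select * from t;
 AColumn | BColumn | CColumn
---------+---------+---------
 A1      | B1      | C1
 A2      | B2      | C2
(2 rows)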
Sunday 23 December 2012
Changing row delimiter in VSQL
Consider a scenario where you export some data with VSQL and some of the string columns have newline characters as part of the column value. If you then want to load the data file into some other table using a COPY statement, or perhaps parse it with a custom parser, the newline characters in the string columns are bound to pose a problem.
One of the easiest and cleanest ways to tackle this problem is to change the VSQL row delimiter to some character other than newline. That way, when you export data, the rows are delimited by the character of your choice, and your COPY command or custom parser can tell the newline characters inside a column value apart from the row delimiter.
In the example below, the record separator is changed from the default \n to '~'. Fields are separated by '|' and records are separated by '~'.
At command line:
Use the -R option (along with -A for unaligned output) to specify the record separator.
$ vsql -o my_table.out -c "select * from my_table" -A -R '~';
$ cat my_table.out
id|status~1001|t~1002|f~1003|t~1004|f~(4 rows)
At vsql prompt:
Use \pset to specify the new record separator:
vsql=> \pset recordsep '~'
Record separator is "~".
Set the output format to unaligned:
vsql=> \a
Output format is unaligned.
vsql=> \o my_table2.out
vsql=> select * from my_table;
vsql=> \o
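To load such an export back into a table, COPY's RECORD TERMINATOR option can point at the same character. A minimal sketch, assuming a hypothetical target table my_table2 with the same columns and an absolute path to the exported file:
vsql=> copy my_table2 from '/home/dbadmin/my_table.out' delimiter '|' record terminator '~';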