Compatibility
The Invantive implementation of SQL is based upon ANSI SQL, extended with aspects from popular SQL implementations such as PostgreSQL, MySQL, Oracle, Teradata and Microsoft SQL Server. It is topped off with Invantive-specific extensions, especially for procedural SQL, distributed SQL and distributed transactions. The guiding principle is to implement functions such that as few changes as possible are necessary to run a SQL statement originating from another SQL implementation on Invantive UniversalSQL. For instance, to retrieve the current time you can use 'sysdate', 'now', 'getdate()' and 'sysdatetime', to name a few. The same holds for the procedural extension Invantive Procedural SQL, which reflects SQL/PSM and makes it easy to port Oracle PL/SQL or PostgreSQL PL/pgSQL statements.
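For example, each of the following statements is intended to return the current date and time (a sketch; the exact set of accepted aliases and whether a from-clause is required may vary by release):

```sql
select sysdate;       -- Oracle style
select now();         -- PostgreSQL and MySQL style
select getdate();     -- SQL Server style
select sysdatetime;   -- SQL Server style
```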
Important Safety and Usage Information
Intended Use and Limitations: This software, developed by Invantive, is designed to support a variety of business and information technology data processing functions, such as accounting, financial reporting and sales reporting. It is important to note that this software is not designed, tested, or approved for use in environments where malfunction or failure could lead to life-threatening situations or severe physical or environmental damage. This includes, but is not limited to:
- Nuclear facilities: The software should not be used for operations or functions related to the control, maintenance, or operation of nuclear facilities.
- Defense and Military Applications: This software is not suitable for use in defense-related applications, including but not limited to weaponry control, military strategy planning, or any other aspects of national defense.
- Aviation: The software is not intended for use in the operation, navigation, or communication systems of any aircraft or air traffic control environments.
- Healthcare and Medicine Production: This software should not be utilized for medical device operation, patient data analysis for critical health decisions, pharmaceutical production, or medical research where its failure or malfunction could impact patient health.
- Chemical and Hazardous Material Handling: This software is not intended for the management, control, or operational aspects of chemical plants or hazardous material handling facilities. Any malfunction in software used in these settings could result in dangerous chemical spills, explosions, or environmental disasters.
- Transportation and Traffic Control Systems: The software should not be used for the control, operation, or management of transportation systems, including railway signal controls, subway systems, or traffic light management. Malfunctions in such critical systems could lead to severe accidents and endanger public safety.
- Energy Grid and Utility Control Systems: This software is not designed for the control or operation of energy grid systems, including electrical substations, renewable energy control systems, or water utility control systems. The failure of software in these areas could lead to significant power outages, water supply disruptions, or other public utility failures, potentially endangering communities and causing extensive damage.
- Other High-Risk Environments: Any other critical infrastructure and environments where a failure of the software could result in significant harm to individuals or the environment.
User Responsibility: Users must ensure that they understand the intended use of the software and refrain from deploying it in any setting that falls outside of its designed purpose. It is the responsibility of the user to assess the suitability of the software for their intended application, especially in any scenarios that might pose a risk to life, health, or the environment.
Disclaimer of Liability: Invantive disclaims any responsibility for damage, injury, or legal consequences resulting from the use or misuse of this software in prohibited or unintended applications.
Distributed SQL, Databases and Data Containers
It is easy to exchange and/or combine data across the supported platforms. Multiple connections can be active to each platform (such as Salesforce or Exact Online Belgium), with the same or different platform-specific connection settings. Each open connection to a platform is named a 'data container'.
All opened connections together are named a 'database'.
When multiple data containers have been opened, each one has an alias to refer to it by in Invantive UniversalSQL statements. For instance, a connection can be open for two different customer accounts on Exact Online Netherlands, aliased as 'eolnl_comp1' and 'eolnl_comp55', and one for an Exact Online Belgium customer, aliased as 'eolbe_my_new_company'. The aliases can be freely chosen as long as they are valid identifiers and defined in the database configuration file 'settings.xml'.
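A distributed query using such aliases might look as follows (a sketch; the table and column names are assumptions for illustration):

```sql
select nl.name
,      be.name
from   exactonlinerest..accounts@eolnl_comp1 nl
join   exactonlinerest..accounts@eolbe_my_new_company be
on     be.code = nl.code
```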
Service Providers
A number of special connections are always made, each of which can occur at most once. These are the 'service providers' such as:
- 'datadictionary': metadata of the current database, such as the list of tables and the performance of executed SQL statements.
- 'os': information on the operating system running the SQL engine, such as reading file contents.
- 'smtp': synchronously send mails through SMTP.
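For instance, the data dictionary service provider can be queried like any other data container (a sketch; the exact table name is an assumption):

```sql
select table_name
from   systemtables@datadictionary
order
by     table_name
```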
Partitioning
Online platforms in particular often have a multi-tenant structure, in which the data is partitioned per customer, company or person. When the data model is identical across tenants, Invantive UniversalSQL considers them 'partitions'. SQL statements can run across one or multiple partitions, often in parallel. This enables consolidation scenarios across partitions (such as Exact Online or Nmbrs companies) as well as high performance in MPP environments.
The partitions to be used can be specified with the 'use' statement, either through an explicit list of partitions to be selected across data containers, or through a SQL select statement returning the list of partitions to use. Please note that although the 'use' statement resembles the 'use DATABASE' statement on Microsoft SQL Server or PostgreSQL, on Invantive UniversalSQL you can have multiple partitions active at the same time in one user session.
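A sketch of both variants (the partition codes, table and column names are assumptions for illustration):

```sql
-- Explicit list of partitions:
use 868, 123;

-- Partitions determined by a select statement:
use select code
    from   systempartitions@datadictionary
    where  name like '%Holding%';
```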
Identifiers
For identifiers, the regular conventions hold for the set of allowed characters. Depending on the platform, identifiers are case-sensitive or not. In general, it is best to assume that identifiers are case-insensitive. Invantive UniversalSQL imposes no length limit on identifiers.
Procedural SQL
Invantive Procedural SQL (or "PSQL" for short) is a procedural extension on top of Invantive UniversalSQL. It is based on the ISO standard 9075-4:2016 (SQL/PSM) and extends Invantive UniversalSQL with procedural options like blocks, variables, conditional execution and loops. The procedural code - together with the Invantive UniversalSQL statements contained in it - is compiled as a whole into pseudo-code and then executed.
The procedural code does not lean on the procedural options of the platforms being used, so it is easy to retrieve and change data in all supported cloud, file and database platforms. The pre-compiled procedural code does not perform context switches between procedural and SQL logic.
Licensing
Features
The available functionality of Invantive UniversalSQL is based upon the licensed features. For instance, the free implementation of Invantive UniversalSQL is limited to 15.000 rows and provides no access to group functions. Please consult the data dictionary contents for your license features.
Usage Fees
For paid products, the fee depends on a number of factors: users, devices and billable partitions. All fees depend on actual use. Additionally, the number and volume in KB of reads and writes are registered to enable communication with the platform partners on the performance of their platform.
A rough estimate is that the number of billable partitions reflects the product of the number of legal entities and the number of source systems. The number of billable partitions is determined as follows; using the same billable partition twice or more often does not increase the count:
- Accessing a provider which does not have a concept of "partition" typically counts as one partition per different connection. A so-called data container ID is determined which reflects the backend connected to. For instance, connecting to my-ftp-host.com and another-ftp-host.com counts as two partitions. Examples of such providers are FTP, SQL Server and cbs.nl.
- Accessing a provider which has a concept of "partition" counts as the number of partitions used during the connection. For instance, using the Exact Online company 234 and 456 counts as two partitions. Examples of such providers are Exact Online, NMBRS, Loket and XML Audit File Financieel.
- Accessing data through xmltable, csvtable, internettable, jsontable and exceltable counts as the number of different parameter combinations used. For instance, accessing an Excel range 'Sheet1' and a named range 'mydata' counts as two partitions.
settings.xml
The file settings.xml defines for a user or program the list of defined databases. Databases are grouped in 'database groups' for visual display. Database groups have no further functionality. Each database consists of one or multiple data containers.
The file 'settings.xml' is most often found on Microsoft Windows in your '%USERPROFILE%\invantive' folder, such as 'c:\users\john.doe\invantive\settings.xml'. It is shared across all Invantive UniversalSQL product installations for the user.
There are many scenarios to share database specifications across a user community, such as WAN scenarios with Invantive Web Service, large corporate scenarios using DNS entries or file shares, included files, and single-user solutions. Please involve a consultant when you want to deploy across thousands of users or more.
For user communities of up to 10 users, we recommend grouping company-specific settings per role in a separate file named 'settings-ROLE.xml' placed in the default folder. Invantive UniversalSQL will automatically merge these files into the main settings.xml file.
Group Functions
Popular group functions such as 'stddev' are available. However, you currently cannot combine group functions and expressions on the columns in one unnested SQL statement. In that case, use an inner (nested) SQL statement to apply the expressions on the data, and execute the group functions in the outer SQL statement, using the syntax 'select group() from ( select ... from ... )'.
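For example, to apply 'stddev' to a computed value, evaluate the expression in a nested select first (a sketch; the table and column names are assumptions for illustration):

```sql
select stddev(amount_net)
from   ( select amount * (1 - discount) amount_net
         from   invoices
       )
```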
Locking
An Invantive UniversalSQL statement can work with many traditional and online platforms. There are no locking features on data and objects, since few of the connected online and traditional platforms provide these, and the typical use of distributed transactions leaves even less opportunity for data and object locking.
Transactions
Invantive UniversalSQL has limited support for transactions. DML is forwarded to a platform and depending on the platform an error can cause part of the work to be registered or everything to be rolled back. Within the SQL engine, multiple changes can be collected and forwarded to the platform at once. For instance, when creating an EDIFACT message you need to combine an invoice header with invoice lines into one EDIFACT message. Collection of multiple changes is done using the 'identified by' and 'attach to' syntax, optionally preceded by 'begin transaction'.
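A sketch of collecting multiple changes into one message (all table, column and identification names are assumptions; the exact clause placement may differ per platform):

```sql
begin transaction;

insert into edifactinvoiceheaders@edi
( invoice_number )
values
( 'INV-1' )
identified by 'INV-1';

insert into edifactinvoicelines@edi
( line_number, amount )
values
( 1, 100 )
attach to 'INV-1';

commit;
```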
Format Masks
Operations such as to_char and to_date support a limited number of format masks:
- YYYY: 4-digit year.
- YYYYMM: 4-digit year, plus 2-digit number month within year.
- YYYYMMDD: 4-digit year, 2-digit month number within year, plus 2-digit day number within month.
- YYYYMMDDHH24MI: 4-digit year, 2-digit month number within year, 2-digit day number within month, 2-digit 24-hour clock hour number, plus 2-digit minutes number within hour.
- YYYYMMDDHH24MISS: 4-digit year, 2-digit month number within year, 2-digit day number within month, 2-digit 24-hour clock hour number, 2-digit minutes number within hour, plus 2-digit seconds number within minute.
- YYYY-MM-DD: 4-digit year, dash ('-'), 2-digit month number within year, dash ('-'), plus 2-digit day number within month.
- YYYY-MM-DD HH24:MI:SS: 4-digit year, dash ('-'), 2-digit month number within year, dash ('-'), plus 2-digit day number within month, 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- YYYY/MM/DD: 4-digit year, slash ('/'), 2-digit month number within year, slash ('/'), plus 2-digit day number within month.
- YYYY/MM/DD HH24:MI:SS: 4-digit year, slash ('/'), 2-digit month number within year, slash ('/'), plus 2-digit day number within month, 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- MMDDYY: 2-digit month number within year, 2-digit day number within month, plus 2-digit year.
- MMDDYYYY: 2-digit month number within year, 2-digit day number within month, plus 4-digit year.
- DDMMYYYY: 2-digit day number within month, 2-digit month number within year, 4-digit year.
- DD-MM-YY: 2-digit day number within month, dash ('-'), 2-digit month number within year, dash ('-'), 2-digit year.
- DD-MMM-YY: 2-digit day number within month, dash ('-'), short month name, dash ('-'), 2-digit year.
- DD/MM/YY: 2-digit day number within month, slash ('/'), 2-digit month number within year, slash ('/'), 2-digit year.
- DD/MMM/YY: 2-digit day number within month, slash ('/'), short month name, slash ('/'), 2-digit year.
- DD-MM-YYYY: 2-digit day number within month, dash ('-'), 2-digit month number within year, dash ('-'), 4-digit year.
- DD-MMM-YYYY: 2-digit day number within month, dash ('-'), short month name, dash ('-'), 4-digit year.
- DD/MM/YYYY: 2-digit day number within month, slash ('/'), 2-digit month number within year, slash ('/'), 4-digit year.
- DD/MMM/YYYY: 2-digit day number within month, slash ('/'), short month name, slash ('/'), 4-digit year.
- DD-MM-YYYY HH24:MI:SS: 2-digit day number within month, dash ('-'), 2-digit month number within year, dash ('-'), 4-digit year, space (' '), 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- DD/MM/YYYY HH24:MI:SS: 2-digit day number within month, slash ('/'), 2-digit month number within year, slash ('/'), 4-digit year, space (' '), 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- MM/DD/YYYY HH24:MI:SS: 2-digit month number within year, slash ('/'), 2-digit day number within month, slash ('/'), 4-digit year, space (' '), 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- DD/MMM/YYYY HH24:MI:SS: 2-digit day number within month, slash ('/'), short month name, slash ('/'), 4-digit year, space (' '), 2-digit hour on 24 hour clock, colon (':'), 2-digit minutes within hour, colon (':'), 2-digit seconds within minutes.
- YYYYIW: 4-digit year, plus ISO week number.
- YYYY-IW: 4-digit year, dash ('-'), plus ISO week number.
- IW: ISO week number.
- D: day of week according to ISO 8601.
- MMMM DD, YYYY HH24:MI TT: long month, 2-digit day number within month, comma (','), space (' '), 4-digit year number, space (' '), 2-digit hour on 24 hour clock, colon (':'), 2-digit minute within hour, space (' '), time zone.
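For example (a sketch; the column aliases are illustrative):

```sql
select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') formatted_now
,      to_date('2024-03-01', 'YYYY-MM-DD') parsed_date
```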
Release History
The following significant changes were made to the Invantive UniversalSQL grammar:
- Release 17.29 (BETA) adds:
- 20181031: allow named expressions as table function parameters, using => as association operator.
- 20181123: add WHEN EXCEPTION to PSQL BEGIN..END block, with exception being WHEN OTHER or WHEN exception name.
- 20181123: extend SYNCHRONIZE statement.
- 20181224: extend SYNCHRONIZE statement with IGNORE NULLS clause.
- Release 17.30 (Production): All changes so far.
- Release 17.31 (BETA) adds:
- 20190116: addition of language code parameter to TRANSLATE_RESOURCE.
- 20190117: addition of IS_NUMBER function.
- 20190124: addition of TO_BOOLEAN, IS_BOOLEAN, IS_DATE, IS_GUID functions.
- Release 17.32 (Production): All changes so far.
- Release 17.33 (BETA) adds:
- 20190131: texts as partition names.
- 20190310: expression as billing id and reference.
- 20190322: select all columns 'except' syntax.
- 20190401: months_between SQL function.
- 20190401: excel_day SQL function.
- 20190403: sqlrowcount, sqlerrm and sqlcode functions.
- 20190410: 'exists' expression.
- 20190601: 'when obsolete within' variant of 'alter persistent cache refresh'.
- 20190605: gzip and ungzip functions.
- 20190624: 'alter session set roles' syntax for connectors with row-level security.
- 20190701: ascii_to_blob and unicode_to_blob functions.
- 20190706: 'alter session set iuid source' syntax.
- 20190821: add htmltable.
- 20191007: add 'except' to synchronize statement for insert and update.
- 20191008: add 'ignore changes to' to synchronize statement for update.
- 20191209: multiple PSQL batches in one statement.
- 20200102: add internettable.
- Release 17.34 (Production): All changes so far.
A SQL batch consists of one or more Invantive UniversalSQL and/or PSQL statements separated by the semi-colon batch separator (';').
A number of SQL and PSQL statements can be used to compose a batch.
::= sqlStatement
An Invantive UniversalSQL statement can retrieve or exchange data for many traditional and online platforms. Many platforms also support the use of DML (Data Manipulation Language) statements to change the data contained. On a few platforms you can execute DDL (Data Definition Language) statements to create new data structures or objects such as tables, procedures or sequences.
Popular Invantive UniversalSQL extensions are the synchronize statement to make sure multiple data sources contain the same data and the use statement to run SQL across multiple partitions.
::= selectStatement
A SQL select statement retrieves data from one or multiple data containers. A select statement can be composed of multiple data sets retrieved from many platforms, combined by set operators such as 'union'.
Often the performance of cloud platforms is less than traditional database platforms. With the 'limit' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'. An alternative for a 'limit' clause is to use the 'top' clause.
A sequence of Invantive UniversalSQL statements, separated by the semi-colon separator character.
Each statement in the SQL batch will be executed consecutively. Execution will be stopped when an error occurs during execution of a statement.
referenced by:
- createOrReplaceViewStatement
- createTableStatement
- embeddedSelect
- expression
- insertStatement
- pSqlForRecordLoopStatement
- sqlStatement
- useStatement
SQL is based upon a solid mathematical foundation named 'set theory', with some exceptions. The set operators of Invantive UniversalSQL enable you to combine sets of data, such as merging two data sets. Please note that SQL actually uses 'bags', which, as opposed to 'sets', allow duplicates. To change bags of data into sets, either use 'distinct' or the 'union' set operator without 'all'. In general, extensive use of 'distinct' signals bad database design.
The 'union' set operator returns the union of the data on the left and right side of the union while removing duplicate rows. The 'union all' set operator returns the union of the data on the left and right side of the union without removing duplicate rows. The 'minus' set operator returns all rows from the left side which do not occur in the right side. The 'intersect' set operator returns all rows that occur both in the left and right side.
The 'union' set operator has an extension to facilitate sieving overlapping data sets of decreasing quality using 'distinct on'. The 'union distinct on' set operator returns the union of the data sets listed, but only the first match on the column list is returned. The first match is defined as a fallthrough from left to right. In general, the preferred source of a set of data is listed first, followed by one or multiple 'union distinct on' set operators, each with a data set of less preference.
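A sketch of sieving two data sets of decreasing quality (the table and column names are assumptions; the exact punctuation of the column list may differ):

```sql
select code
,      name
from   preferred_source
union distinct on code
select code
,      name
from   fallback_source
```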
Retrieves a data set from one or more data containers.
::= select executionHints? distinct? topClause? selectList ( into pSqlVariableList )? ( FROM dataSource joinStatements? whereClause? )? groupBy?
A data source can be a table, a table with parameters or a nested select (an 'inline view'). Also, a data source can be a binary or text string being interpreted as XML, CSV, JSON or binary Excel Open XML format.
Execution hints allow you to individually control the execution of SQL statements. Whenever possible, the hints will be used. Contrary to other platforms, Invantive UniversalSQL requires a hint, when specified, to be valid according to the grammar. This reduces the engineering risk that hints become invalid by accident.
The http_disk_cache-hint specifies whether messages may be cached on disk when the provider uses HTTP to exchange data with the backing platform. This typically holds only for cloud-based platforms such as Exact Online, Teamleader or Salesforce. The default setting is false. The first parameter is a boolean specifying whether data may be taken from the disk cache, the second parameter is a boolean specifying whether retrieved data must also be stored in the disk cache, and the third parameter is an interval that specifies the timespan after which a disk cache hit is considered stale, such as "interval '4 hours'".
The use of the http_disk_cache-hint is recommended for data which is known to change seldom, such as seeded or reference data. The contents of the disk cache persist across Invantive UniversalSQL sessions.
The disk cache is located in the Cache folder of the Invantive configuration folder.
::= HTTP_DISK_CACHE ( PARENTHESIS_OPEN booleanConstant ( COMMA booleanConstant ( COMMA intervalConstant )? )? PARENTHESIS_CLOSE )?
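A sketch of the hint's use (the table reference is an assumption for illustration):

```sql
select /*+ http_disk_cache(true, true, interval '4 hours') */
       code
,      name
from   exactonlinerest..accounts@eol
```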
The http_memory_cache-hint specifies whether messages may be cached in memory when the provider uses HTTP to exchange data with the backing platform. This typically holds only for cloud-based platforms such as Exact Online, Teamleader or Salesforce. The default setting is false. The first parameter is a boolean specifying whether data may be taken from the memory cache, the second parameter is a boolean specifying whether retrieved data must also be stored in the memory cache, and the third parameter is an interval that specifies the timespan after which a memory cache hit is considered stale, such as "interval '4 hours'".
The use of the http_memory_cache-hint is recommended for data which is known to change seldom, such as seeded or reference data. The contents of the memory cache are not retained across Invantive UniversalSQL sessions.
The ods-hint controls the use of the Invantive Data Cache stored in a relational database. The Invantive Data Cache is also the basis of the Operational Data Store managed by Invantive Data Replicator and the data warehouses managed by Invantive Data Vault. The ods-hint specifies the maximum age of data from the data cache that is eligible for use.
The boolean specifies whether the Data Cache may be used to answer a query. Set it to false to disable use of Data Cache for the duration of the query. Keep it on the default true to use Data Cache.
The interval specifies the period of time during which cached results are considered sufficiently fresh for use, such as '30 minutes'.
When no interval is present, the actual platform is consulted. The default with Invantive Data Cache enabled is to always use the data cache contents when not stale according to the metadata of the data cache. In general, that defaults to a maximum age of 7 days.
When use of the data cache is requested and a filter is applied, all data will be retrieved from the actual platform and filtered in the Invantive UniversalSQL engine. This also applies when the filter could have been forwarded to the actual platform. When the data is requested a limited number of times and the filter severely reduces the amount of data to be transported, it can be faster not to use the data cache.
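A sketch of the hint's use (the table reference is an assumption for illustration):

```sql
-- Allow data cache results up to 30 minutes old:
select /*+ ods(true, interval '30 minutes') */
       code
from   exactonlinerest..accounts@eol;

-- Bypass the data cache for this query:
select /*+ ods(false) */
       code
from   exactonlinerest..accounts@eol;
```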
Controls the join approach between two data sources. A column-indexed lookup will be used instead of a full table scan when the number of rows on the left-hand side does not exceed the maximum number of rows specified in the hint. When not specified, a hash lookup will only be used when the number of rows on the left-hand side does not exceed 5.000.
The actual implementation of a hash lookup depends on the platform on which the data container runs. For instance, with OData, a number of requests will be made using an in-construct with a limited number of in-values. With a relational database platform, a native SQL 'in' will be used.
The first identifier is the alias of the table on the right-hand side of the join. The second identifier is the name of the column in the right-hand side used to join upon. The numeric constant specifies up to what number of rows on the left-hand side of the join the join_set hint will be used. When the number of rows exceeds the numeric constant, a full table join is made.
The following example takes, for instance, 5.000 sales invoices from an Exact Online environment with 100.000 sales invoices. Each sales invoice has 4 to 10 lines. The join does not retrieve all sales invoices nor all invoice lines; instead it fetches the 5.000 sales invoices using the where-clause, and then retrieves the related invoice lines using a column-indexed lookup by invoiceid. Since Exact Online is an OData source, the approximately 30.000 invoice lines will be retrieved in 300 session I/Os, each having an in-construct for 100 values of invoiceid.
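A sketch of such a statement (the table and column names, and the where-clause, are assumptions for illustration):

```sql
select /*+ join_set(sil, invoiceid, 10000) */
       sih.invoicenumber
,      sil.amountfc
from   exactonlinerest..salesinvoices@eol sih
join   exactonlinerest..salesinvoicelines@eol sil
on     sil.invoiceid = sih.invoiceid
where  sih.invoicedate >= trunc(sysdate) - 365
```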
The no_join_set hint disables the use of hash joins. Hash joins can be enabled again using the join_set hint.
The low_cost-hint specifies that the select carrying the hint must be considered a select with low execution costs. Low execution costs trigger early evaluation during parsing. By default, select statements using solely in-memory storage, dummy and data dictionary are considered low cost and evaluated early. The evaluation of all others is delayed as long as possible.
The use of the low_cost-hint is recommended when the select is used with a 'in ( select ... )' syntax and the developer knows beforehand that it will evaluate fast to values and that the use of these values will allow the use of server-side filtering for the outer select.
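A sketch of this pattern (the table names are assumptions for illustration):

```sql
select *
from   projects
where  code in
       ( select /*+ low_cost */ code
         from   selectedprojects@inmemorystorage
       )
```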
Addition of the 'distinct' keyword to a SQL select statement de-duplicates the rows returned. Rows are considered duplicates when the values in all selected columns are identical, with two null-values considered equal.
referenced by:
- avgAggregateFunction
- countAggregateFunction
- listAggAggregateFunction
- productAggregateFunction
- stdDevAggregateFunction
- sumAggregateFunction
- uniqueSelectStatement
With the 'top' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'.
With the 'limit' clause a limited number of rows can be retrieved quickly from a table or view after applying sorting as specified by the possibly present 'order by'.
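For example (a sketch; the table name is an assumption, and the exact clause placement may differ per release):

```sql
-- Using 'top':
select top 10 *
from   accounts
order
by     name;

-- Equivalent using 'limit':
select *
from   accounts
order
by     name
limit  10;
```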
The 'for' clause replaces the results of a query by a single row, or a few rows, with a single column embedding the full results in the respective format.
The 'for csv' clause replaces the results of a query by a few rows with a single column named 'CSV' embedding the full results in CSV format. Depending on the data volume, multiple rows can be returned. By default, each output row contains at most 1.000 rows from the query. The actual number of rows from the query can be set using 'OUTPUT PER ... ROWS'.
By default, the first output row starts with a technical header of the respective column names. The headers can be replaced by labels using 'INCLUDE HEADERS'. The headers can be fully disabled by adding 'EXCLUDE HEADERS'. The output of headers in each output row can be enabled using 'REPEAT HEADERS' with 'OUTPUT PER ... ROWS'. In that case, 'OUTPUT PER ... ROWS' must always be specified even with the default of 1.000 query rows per output row.
The CSV-output can use other texts than CR/LF for row delimiter, ',' for column delimiter, '"' for quoting character and '\"' for escaped quoting character. These can be set using, respectively, 'ROW DELIMITER', 'COLUMN DELIMITER', 'QUOTING CHARACTER' and 'ESCAPED QUOTING CHARACTER'.
::= CSV ( COMMA ( INCLUDE TECHNICAL? | EXCLUDE ) HEADERS )? ( COMMA ROW DELIMITER stringConstant )? ( COMMA COLUMN DELIMITER stringConstant )? ( COMMA QUOTING CHARACTER stringConstant )? ( COMMA ESCAPED QUOTING CHARACTER stringConstant )? ( COMMA OUTPUT PER numericConstant ( ROW | ROWS ) ( REPEAT HEADERS )? )?
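A sketch following the grammar above (the table name is an assumption for illustration):

```sql
select code
,      name
from   accounts
for    csv, include headers, output per 500 rows
```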
The 'for excel' clause replaces the results from a query by a single row, single column named 'Excel' embedding the full results in an Excel xlsx-file format. The Excel xlsx-file has an Excel worksheet with a name (by default 'resultset1') with the rows.
By default, the first Excel worksheet starts with a header with all column names. No header is added when 'EXCLUDE HEADERS' is specified. Labels can be used as headers by specifying 'INCLUDE HEADERS'.
The 'for json' clause replaces the results from a query by a single column named 'JSON' embedding the full results in a JSON format. Depending on the data volume, multiple rows can be returned. By default, each output row contains at most 1.000 rows from the query. The actual number of rows from the query can be set using 'OUTPUT PER ... ROWS'.
One of the naming configurations is 'AUTO'; the rows return one JSON object, with column names and their values as, respectively, JSON property names and values. A period ('.') in a column name will not influence the generated JSON differently from any other character. Casing of the column names will be reflected in the JSON property names. Casing is upper-case by default in Invantive UniversalSQL. Escape column names using square brackets to accurately specify casing of JSON property names.
As an alternative, the naming configuration can be switched to 'PATH'. With 'PATH', the JSON property names are identical to 'AUTO' except for a period ('.') in a column name. Each period in a column name introduces a new nesting level in the generated JSON.
A wrapper can be specified using 'ROOT'. The default root JSON property name is 'root'. A deviating name can be specified as a text constant between parentheses.
Properties are excluded for column null values. JSON properties are generated even for null values when 'INCLUDE_NULL_VALUES' is present.
The JSON objects are combined into a JSON array unless 'WITHOUT_ARRAY_WRAPPER' is present. With 'WITHOUT_ARRAY_WRAPPER' present, the output becomes NDJSON, where each individual output row represents a single source row.
::= JSON ( AUTO | PATH ) ( COMMA ROOT ( parenthesisOpen stringConstant parenthesisClose )? )? ( COMMA INCLUDE_NULL_VALUES )? ( COMMA WITHOUT_ARRAY_WRAPPER )? ( COMMA OUTPUT PER numericConstant ( ROW | ROWS ) )?
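A sketch instantiating the production above; the table and column names are illustrative:

```sql
-- 'path' nests on periods in column names, 'root' wraps the array in a
-- property named 'prj', and each output row covers at most 100 source rows.
select code
,      name
from   projects
for    json path, root('prj'), include_null_values, output per 100 rows
```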
referenced by:
The 'for xml' clause replaces the results from a query by a single row, single column named 'XML' embedding the full results in an XML format.
::= XML ( ( ( RAW ( parenthesisOpen stringConstant parenthesisClose )? | AUTO ) forXmlClauseCommonDirectives ( COMMA ( XMLDATA | XMLSCHEMA ) stringConstant? )? | PATH ( parenthesisOpen stringConstant parenthesisClose )? forXmlClauseCommonDirectives ) ( COMMA ELEMENTS ( XSINIL | ABSENT )? )? | EXPLICIT forXmlClauseCommonDirectives )
referenced by:
::= ( COMMA BINARY BASE_64 )? ( COMMA TYPE )? ( COMMA ROOT ( parenthesisOpen stringConstant parenthesisClose )? )?
referenced by:
An embedded select, also known as an 'inline view', retrieves rows using the specified select statement. These rows are consumed by the outer select as if they were retrieved from a table.
Invantive UniversalSQL does not allow grouping rows on expressions used as columns. An embedded select is typically used to first evaluate the expressions into rows containing solely constants; the group operators can then be applied to the results of the embedded select.
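For instance, grouping on an expression can be done by first evaluating the expression in an embedded select (the table, column and date function are illustrative):

```sql
-- The inner select reduces each row to constants; the outer select
-- can then group on the resulting column.
select invoice_year
,      count(*)
from   ( select year(invoice_date) invoice_year
         from   invoices
       )
group
by     invoice_year
```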
referenced by:
referenced by:
A table specification without parameters. The optional alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.
A number of special so-called 'service providers' are always present, such as 'datadictionary' for use via an alias.
referenced by:
- alterPersistentCacheConfigureWebhooksStatement
- alterPersistentCacheDropStatement
- alterPersistentCacheSetTableOptions
- alterPersistentCacheTableRefreshStatement
- applyToClause
- createTableStatement
- dropTableStatement
- synchronizeStatement
referenced by:
A table specification requiring a comma-separated list of parameters to determine the rows to be retrieved.
Traditional SQL syntax did not provide for parameterized queries, in keeping with set theory. Modern variants such as pipelined table functions allow a stored procedure or another imperative approach to generate rows based upon parameter values. Many data containers support queries that return rows based upon parameter values. This holds especially for SOAP web services. Table specifications with parameters ease queries on such data containers.
The optional alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.
no references
tableOrFunctionSpecWithRotator:
referenced by:
A list of parameter value expressions identified by position or name to determine the rows to be retrieved by a tableOrFunctionSpec.
referenced by:
numberedOrNamedExpressionList:
referenced by:
- pSqlFunctionOrProcedureStatementNS
- pSqlPackageProcedureStatementNS
- pSqlRaiseStatement
- tableFunctionSpec
An ordered comma-separated list of value expressions.
referenced by:
An unordered list of value expressions, identified by the parameter name.
no references
A parameter value identified by name, consisting of the parameter name, the association operator '=>' and the value expression.
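A sketch combining positional and named parameter values; the function and parameter names are hypothetical:

```sql
-- The first value is bound by position, the second by name using '=>'.
select *
from   some_table_function('2024', p_limit => 10)
```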
referenced by:
distributedAliasOrRotatorDirective:
referenced by:
referenced by:
The distributed alias after the at-sign specifies a specific data source to be used, such as 'exactonlinerest..journals@eolbe' specifying the use of Exact Online Belgium when 'eolbe' is associated by the database definitions in settings.xml with Exact Online Belgium.
A number of special so-called 'service providers' are always present, such as 'datadictionary' for use via an alias.
referenced by:
- pSqlPackageProcedureStatementNS
- partitionIdentifierWithAlias
- setIdentifier
- tableOrFunctionSpec
- tableSpec
When multiple data containers have been defined in settings.xml for a database, each one is assigned an alias. An alias typically takes the form of a limited number of characters. The presence of an alias allows Invantive UniversalSQL to precisely determine to which data container to forward a request for data.
referenced by:
- alterPersistentCacheRefreshStatement
- distributedAliasDirective
- distributedAliasOrRotatorDirective
- rotatorDirective
passingSourceOrPathExpression:
The passing option specifies the source of the contents to be interpreted. The contents can be specified as the outcome of an expression, such as the result of a read_file() table function, a URL downloaded using httpget() or a string concatenation. Alternatively, the contents can be taken from a specific file identified by its file name and path as an expression.
referenced by:
Data stored in XML format can be interpreted as a data source using the xmltable keyword.
The expression specifies a master XPath expression within the context of which the rows are evaluated using the column specifications.
The passing option specifies the source of the data in XML format. The source is often the outcome of a read_file() table function, a URL download using httpget() or a previous xmlformat() SQL function.
The columns are specified using their XPath relative to the master path.
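A sketch of xmltable reading an RSS feed; the URL, XPath expressions and column definitions are illustrative, and the exact column syntax follows the xmlTableColumns production:

```sql
-- Rows are evaluated within the master XPath '/rss/channel/item';
-- each column is addressed by an XPath relative to that master path.
select x.*
from   xmltable
       ( '/rss/channel/item'
         passing httpget('https://example.com/feed.xml')
         columns title    varchar2 path 'title'
         ,       pub_date varchar2 path 'pubDate'
       ) x
```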
referenced by:
referenced by:
A literal value containing a valid XML document.
referenced by:
A list of XML table column specifications.
referenced by:
The columns are specified using their XPath relative to the master path.
referenced by:
Data stored in JSON format can be interpreted as a data source using the jsontable keyword.
The expression specifies a master JSON expression within the context of which the rows are evaluated using the column specifications.
The passing option specifies the source of the data in JSON format. The source is often the outcome of a read_file() table function or a URL download using httpget().
The columns are specified using their JSON path relative to the master path.
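A sketch of jsontable; the URL, JSON paths and column names are illustrative:

```sql
-- Rows are evaluated within the master path 'items[*]'; columns are
-- addressed by JSON paths relative to that master path.
select j.*
from   jsontable
       ( 'items[*]'
         passing httpget('https://example.com/api/items')
         columns id   int      path 'id'
         ,       name varchar2 path 'name'
       ) j
```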
referenced by:
::= NDJSONTABLE parenthesisOpen ( expression | null )? ( jsonTablePassing | jsonTableLiteral ) ( IGNORE INVALID LINE )? jsonTableColumns parenthesisClose
referenced by:
referenced by:
A literal value containing a valid JSON document.
referenced by:
A list of JSON table column specifications.
referenced by:
The columns are specified using their JSON path relative to the master path.
referenced by:
Data stored in CSV format can be interpreted as a data source using the csvtable keyword.
The passing option specifies the source of the data in CSV format. The source is often the outcome of a read_file() table function or a URL download using httpget().
The interpretation process can be controlled at table level using the table options. The specification ends with the CSV columns being mapped to data source columns using their relative position within a row.
referenced by:
The interpretation process can be controlled at table level using the table options.
The row delimiter is a text that separates two CSV rows, such as "chr(13) || chr(10)" or simply a pipe character "'|'". The default is the operating system-specific variant of new line.
The column delimiter is a text that separates two CSV columns such as "';'". The default is comma.
Data in CSV will typically have one or more header CSV rows labeling the individual columns. The 'skip lines' statement excludes the first number of CSV rows from processing.
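A sketch using a CSV literal together with the table options described above; the exact ordering of the options follows the csvtable grammar:

```sql
-- Semicolon-separated columns, one header row skipped, columns mapped
-- by position ('next' means one position after the previous column).
select c.*
from   csvtable
       ( passing 'code;descr' || chr(10) || 'A;Alpha' || chr(10) || 'B;Beta'
         row delimiter chr(10)
         column delimiter ';'
         skip lines 1
         columns code  varchar2 position 1
         ,       descr varchar2 position next
       ) c
```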
referenced by:
A literal value containing a valid CSV document.
referenced by:
referenced by:
A list of CSV table column specifications.
referenced by:
Each CSV column is mapped to a data source column using its relative position within the CSV row. Position 1 represents the first CSV column. A position can be represented using an integer, or by specifying the keyword 'NEXT'. The keyword 'NEXT' specifies that the column is located one position higher than the previous column. When there is no previous column, the column is located at position 1.
::= identifier sqlDataTypeExtended ( FORMAT stringConstant )? ( POSITION ( numericConstant | NEXT ) )?
referenced by:
Excel file contents in Open XML, such as used with the file extensions 'xlsx' and 'xlsm', can be interpreted as a data source using the exceltable keyword.
The rectangle specifies the rectangular area from which data should be taken. The first worksheet is used when no rectangle is specified or when a null value is given as worksheet name. Rows in Excel are interpreted as rows from the data source, whereas columns in Excel are interpreted as columns.
The passing option specifies the source of the (binary) contents in zipped Open XML format such as used with the file extensions 'xlsx' and 'xlsm'. The source is often the outcome of a read_file() table function or URL download using httpget().
The interpretation process can be controlled at table level using the table options. The specification ends with the Excel columns being mapped to data source columns using their relative position within the rectangular area.
referenced by:
The rectangle specifies the rectangular area from which data should be taken. Rows in Excel are interpreted as rows from the data source, whereas columns in Excel are interpreted as columns. All cells within the rectangle are considered data; headings should be excluded from the rectangle specification.
A rectangle can be either:
- The contents of a complete worksheet, specified by an expression returning the worksheet name. Specify null to select the first worksheet independent of name. Alternatively, a worksheet can be selected by specifying an integer indicating its relative position, including hidden worksheets.
- A named range within a specific worksheet, specified by expressions for optional worksheet name and named range name.
- A cell range identified by an expression with its dimensions, including an optional worksheet name and a range definition. The range definition can have various forms: 'A1:Z999' or '$A$1:$Z$999' for an area, 'A:Z' or '1:26' for all cells in the given columns or rows, or 'A1' to refer to a specific cell.
- An Excel table identified by an expression with the table name.
- A named range identified by an expression with the named range name.
- Unspecified, selecting the first worksheet independent of name.
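A sketch reading a cell range from a named worksheet; the file path, worksheet name and range are illustrative, and 'passing file' reflects the file path variant of the passing option:

```sql
-- Rows within A2:C100 of worksheet 'Data' become data source rows;
-- headings are excluded by starting the rectangle at row 2.
select x.*
from   exceltable
       ( worksheet 'Data'
         area 'A2:C100'
         passing file 'c:\temp\report.xlsx'
         columns code   varchar2 position 1
         ,       name   varchar2 position next
         ,       amount number   position next
       ) x
```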
referenced by:
referenced by:
The interpretation process can be controlled at table level using the table options.
A position can be represented using an integer, or by specifying the keyword 'NEXT'. The keyword 'NEXT' specifies that the column is located one position higher than the previous column. When there is no previous column, the column is located at position 1.
Empty rows will typically be found when consuming a rectangular area larger than the actual data, such as a complete worksheet. The 'skip empty rows' statement eliminates all completely empty rows from the output.
referenced by:
A list of Excel table column specifications.
referenced by:
Each Excel column is mapped to a data source column using its relative position within the rectangular area. Position 1 represents the first Excel column falling within the Excel rectangular area.
referenced by:
Data stored (lossily) in HTML format can be interpreted as a data source using the htmltable keyword.
The expression specifies a master path within the context of which the rows are evaluated using the column specifications.
The passing option specifies the source of the data in HTML format. The source is often the outcome of a read_file() table function or a URL download using httpdownload(). Full adherence to the HTML format is not required; the parser often automatically corrects many HTML format errors.
The columns are specified using their path relative to the master path.
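A sketch scraping an HTML table from a webpage; the URL and paths are illustrative:

```sql
-- Rows are the <tr> elements under the master path; each column is a
-- path relative to a row, here the first and second <td> cell.
select h.*
from   htmltable
       ( '//table[@id="prices"]//tr'
         passing httpget('https://example.com/prices')
         columns product varchar2 path 'td[1]'
         ,       price   varchar2 path 'td[2]'
       ) h
```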
referenced by:
referenced by:
referenced by:
referenced by:
A list of HTML table column specifications.
referenced by:
Each HTML table column is mapped to a data source column using the path relative to the master path.
referenced by:
Splits a text upon another text, returning one row per element.
Parameters:
- Input: The text to split.
- Splitter: Text being split upon.
Returns: a number of rows with one column named 'value'; one per element in the input.
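For instance, assuming the table function is named string_split:

```sql
-- Returns three rows with one column 'value': 'red', 'green' and 'blue'.
select value
from   string_split('red,green,blue', ',')
```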
referenced by:
Data stored on the Internet and accessible through the HTTP protocol can be interpreted as a data source using the internettable keyword. A depth-first scan is done in which solely unique URLs are returned. Starting at one initial URL, webpages and content are downloaded from the Internet. Content is made available without further inspection. Webpages in HTML format are by default scanned for more URLs on the following paths:
- //a[@href]: all hrefs in anchors;
- //script[@src]: all sources of scripts;
- //link[@href]: all hrefs of links;
- //img[@src]: all sources of images.
The startAtExpression specifies the initial webpage to retrieve data for.
A pre-defined list of columns is available per retrieved URL:
- URL: URL of page;
- Contents_char: the character contents, converted from the original character set into UTF-8;
- Contents_blob: the binary contents;
- Mime_type: MIME-type returned by the web server;
- Http_status_code: numeric HTTP response status code;
- Date_retrieval_utc: date/time when the response was received (UTC);
- Retrieval_duration_ms: time between the request and complete response in milliseconds;
- Bytes_retrieved: number of bytes retrieved;
- Depth: recursion depth, starting at 1 for the initial URL;
- Retrieval_successful: indicator whether the response was retrieved completely and successfully;
- Last_modified: date/time when the response's content was last modified;
- Etag: ETAG on the content as returned by the web server;
- Content_disposition: preferred file name and encoding to be used;
- Cache_Control: contents of cache-control HTTP response header;
- Expires: contents of the Expires HTTP response header;
- Error_message_code: Invantive UniversalSQL engine error message code if any occurred;
- Error_message_text: Invantive UniversalSQL engine error message text if any occurred.
referenced by:
referenced by:
referenced by:
referenced by:
The process can be controlled using options:
- Stay on site: when present, recursion is restricted to URLs on the same host name as the starting URL. Default behaviour is to branch out to other sites too.
- Maximum depth: limits the depth of the recursion to a specific number.
- Ignore errors: do not stop on the first error but continue. Default behaviour is to stop on the first error or on completion of the results, whichever comes first.
referenced by:
::= sqlDataType ( parenthesisOpen numericConstant ( COMMA numericConstant )? parenthesisClose )? ( NOT? NULL )? labeled?
referenced by:
- castExpression
- createTableArgument
- csvTableColumnSpec
- excelTableColumnSpec
- htmlTableColumnSpec
- jsonTableColumnSpec
- xmlTableColumnSpec
::= BFILE
| BIGINT
| BIT
| BLOB
| BOOL
| BOOLEAN
| BPCHAR
| BYTE
| BYTEA
| CHANGES
| CHAR
| CLOB
| DATE
| DATETIME
| DEC
| DECIMAL
| DOUBLE
| FLOAT
| FLOAT4
| FLOAT8
| GUID
| IMAGE
| INT
| INT16
| INT2
| INT32
| INT4
| INT64
| INT8
| INTEGER
| INTERVAL
| LONGBLOB
| LONGTEXT
| MONEY
| NAME
| NCHAR
| NUMBER
| NUMERIC
| NVARCHAR
| OID
| RAW
| REAL
| SERIAL
| SMALLINT
| TEXT
| TIME
| TIMETZ
| TINYBLOB
| TINYINT
| TINYTEXT
| UINT16
| UINT32
| UINT64
| UUID
| VARCHAR
| VARCHAR2
| XML
| XMLTYPE
| YEAR
referenced by:
Grouping of multiple rows into groups is specified by the groupBy. A group will be introduced for each distinct combination of column values for the columns listed. The values of grouped columns can be used in the select clause. Columns not being grouped upon can only be used within the context of a group function listed as 'aggregateFunction'.
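A sketch with illustrative table and column names:

```sql
-- One output row per distinct category; 'amount' may only appear
-- inside a group function since it is not grouped upon.
select category
,      count(*)    cnt
,      sum(amount) total_amount
from   invoice_lines
group
by     category
```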
referenced by:
Sort the rows returned as specified by the list of columns. Values are either sorted ascending (the default) or descending.
referenced by:
referenced by:
A sort direction can be either 'asc' for 'ascending' (the default) or 'desc' for 'descending'.
referenced by:
A comma-separated list of columns.
referenced by:
- allColumnsSpecId
- groupBy
- identifiedByMultipleClause
- insertFieldList
- setOperatorSelectStatement
- synchronizeInsertStatement
- synchronizeUpdateStatement
An ordered comma-separated list of column values to sort upon.
referenced by:
A column value to sort upon.
referenced by:
A column is identified by an identifier, possibly prefixed by the name of the table or the alias of the table from which the column is to be taken.
referenced by:
referenced by:
referenced by:
The where-clause restricts the number of rows in a result set by applying one or more boolean conditions that rows must satisfy.
referenced by:
A list of join statements.
referenced by:
A join statement combines two result sets. Only combinations of rows taken from both result sets are returned when they meet the join conditions.
referenced by:
The join category specifies what combinations of rows are considered. The following variants can be used:
- inner join, as indicated by 'join' or 'inner join': an inner join returns all combinations of rows from both result sets that meet the join conditions.
- left outer, as indicated by 'left outer join': a left outer join returns the same rows as an inner join, extended by one row for each row in the left result set having no matching rows in the right result set. Each column that originates from the right result set is assigned a null value.
- right outer, as indicated by 'right outer join': a right outer join returns the same rows as an inner join, extended by one row for each row in the right result set having no matching rows in the left result set. Each column that originates from the left result set is assigned a null value.
- full outer, as indicated by 'full outer join': a full outer join returns the same rows as an inner join, extended by one row for each row in the right result set having no matching rows in the left result set. Each column that originates from the left result set is assigned a null value. The results are also extended by one row for each row in the left result set having no matching rows in the right result set. Each column that originates from the right result set is assigned a null value.
- cross join, as indicated by 'cross join': a cross join returns a Cartesian product of the rows from both result sets. A 'Cartesian product' is a term from set theory, which indicates that all combinations are returned.
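For instance, a left outer join keeps all rows of the left result set (table and column names are illustrative):

```sql
-- Projects without invoices appear once, with null invoice columns.
select p.code
,      i.invoice_number
from   projects p
left
outer
join   invoices i
on     i.project_code = p.code
```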
referenced by:
The join sub-category refines the join category. Please see 'joinCategory' for an explanation.
referenced by:
referenced by:
referenced by:
referenced by:
Extracts a substring with the given length from the left side of a value. The 'left' keyword is also used with joins.
Parameters:
- Input: Text to extract substring from.
- Length: Maximum length of the substring.
referenced by:
Extracts a substring with the given length from the right side of a value. The 'right' keyword is also used with joins.
Parameters:
- Input: Text to extract substring from.
- Length: Maximum length of the substring.
referenced by:
referenced by:
referenced by:
Group function to sum together individual numerical values. Occurrences of null are considered 0, unless there are only null values. In that case the outcome is null.
referenced by:
Group function to multiply together individual numerical values. Multiplying large values can quickly exceed the range of the resulting Decimal data type. The product group function is typically used in financial and probability calculations with values near 1.
referenced by:
Group function to find the minimum value from a group of numerical values or text values.
referenced by:
Group function to find the maximum value from a group of numerical values or text values.
referenced by:
Group function to find the average value from a group of numerical values.
referenced by:
Group function returning the first non-null value in an ordered result set.
referenced by:
Group function returning the last non-null value in an ordered result set.
referenced by:
Group function to find the standard deviation from a group of numerical values.
referenced by:
Group function to find the number of values from a group of values.
referenced by:
Group function which concatenates all individual values, separated by the separator when provided and comma plus space otherwise.
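A sketch, assuming the concatenating group function is named listagg:

```sql
-- One row per project, with all member names joined by '; '. Without an
-- explicit separator, comma plus space is used.
select project_code
,      listagg(member_name, '; ')
from   project_members
group
by     project_code
```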
referenced by:
referenced by:
referenced by:
A boolean expression which defines valid combinations of the join.
referenced by:
referenced by:
referenced by:
referenced by:
Defines the textual label of an expression. The label may contain resources in the format '{res:resource code}' such as 'My {res:itgen_description}'. Similar to the data type and alias of an expression, the label is maintained across selections. Application of a calculation or a SQL function resets the label to the empty value. User interfaces can choose to display the label when available instead of the column name to provide a more natural interface.
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
Selects all columns from an SQL statement or from one or more of the data sources used. The except clause allows retrieval of all columns except the listed names. The except clause is typically used with wide result sets with possibly varying column lists from which a few pre-defined columns need to be excluded.
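A sketch; the column names are illustrative and the exact list syntax follows the grammar of the except clause:

```sql
-- All columns of the wide result set except the two audit columns.
select * except audit_created, audit_modified
from   wide_table
```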
referenced by:
referenced by:
allColumnsSpecColumnNamePrefix:
Defines the aliases of columns selected by the select all ('*') operator as their original alias, prefixed with the specified text. For instance, the select operator changes the alias of the column 'code' with the prefix 'pjt_' into 'pjt_code'.
referenced by:
allColumnsSpecColumnNamePostfix:
Defines the aliases of columns selected by the select all ('*') operator as their original alias, postfixed with the specified text. For instance, the select operator changes the alias of the column 'code' with the postfix '_from_table2' into 'code_from_table2'.
referenced by:
Defines the label of columns selected by the select all ('*') operator as their original label, prefixed with the specified text. For instance, the select operator changes the label 'Name' of the column 'name' with the prefix 'Project ' into 'Project Name'.
referenced by:
Defines the label of columns selected by the select all ('*') operator as their original label, postfixed with the specified text. For instance, the select operator changes the label 'Name' of the column 'name' with the postfix ' from Table2' into 'Name from Table2'.
referenced by:
All available Data Definition Language statements.
referenced by:
alterPersistentCacheStatement:
Besides an in-memory cache valid for the duration of a session, Invantive UniversalSQL offers an integrated cache storing data persistently using an on-premise or cloud relational database such as SQL Server or PostgreSQL. When configured, Invantive UniversalSQL first tries to find sufficiently fresh data in the cache. This reduces the number of data loads from slow data containers such as some cloud platforms. In general, the performance increase when the rows can be fully retrieved from a cache is between a factor of 25 and 2,500.
Invantive UniversalSQL itself manages the table structure and table contents in the relational database used as a data cache. On initial use, just provide an empty database. Platforms supported include SQL Server, Oracle, PostgreSQL and MySQL. Invantive UniversalSQL installs a repository consisting of a dozen tables. The repository tables have names starting with 'dc_'. Most repository tables are accompanied by a view named identically except for a trailing '_r'. The repository views join in all master tables for easier analysis from within the database storing the facts.
For each table partition version, a so-called 'facts table' is created. A facts table contains a full copy of the rows retrieved from the data container. Facts tables have names starting with 'dcd_', followed by a unique hash signaling the table partition version. When necessary, additional database objects are maintained such as indexes to improve performance. As with facts table names, all column names are also hashed based upon an algorithm including the original column name. These facts tables are not intended for direct use using native SQL.
Each facts table has a unique state from the following states, with the 'Ready' state signaling the now current version:
- Initializing ('I'): the facts table will be created.
- View creation ('V'): logical views will be created.
- Prepared ('P'): the facts table has been created, but contains no rows yet.
- Seeding ('S'): the facts table is being seeded with the contents of the previously current version.
- Loading ('L'): loading new facts from the data container using the watershed or another algorithm.
- Ready ('R'): the facts table is available and the current one to be used.
- Obsoleted ('O'): the facts table still exists, but the data has passed its conservation period. Often a newer version is now current.
- Dropped ('D'): the facts table no longer exists, but the metadata is still present in the repository tables.
The persistent cache in the database can be used with native SQL when extended by Invantive Data Replicator. Invantive Data Replicator creates and maintains a database view (a so-called 'partition view') for the now current version of a table partition. Similarly, it can create an 'overall view', showing the rows across all partitions of the now current versions per partition. Invantive Data Replicator provides a high-performance, high-volume operational data store with little effort.
The overall views are typically used for consolidation purposes, bringing together data across multiple companies or persons.
referenced by:
alterPersistentCachePurgeStatement:
The purge statement maintains existing table partition versions. It can purge table partition versions in several ways:
- ready: selects table partition versions in state 'Ready'. Table partition versions go to the state 'Obsolete'. Facts are not deleted.
- obsolete: selects table partition versions in state 'Obsolete'. Facts of these are deleted. Table partition versions go to the state 'Dropped'. The facts deleted might be used in queries that started running when they were in state 'Ready'. The facts can be deleted safely when the database platform supports reading data from rows already deleted, such as Oracle. Purging obsolete facts is often used as a background task separate from refresh to reduce refresh runtime, since the deletion of data has already been executed before refresh.
- droppable: selects table partition versions in state 'Dropped'. Facts are deleted when any facts are still present despite the dropped state. This will be useful only under exceptional conditions, since the dropped status indicates that the facts have already been deleted. Table partition versions remain in the state 'Dropped'.
- unknown: queries the database for tables that match the naming convention 'dcd...' or 'dcs...'. Any table that is not registered in the repository is dropped from the database. This will be useful only under exceptional conditions, such as when the repository was unintentionally dropped or incorrectly changed.
- all: first performs the "unknown" variant and then the "droppable" variant. This will be useful only under exceptional conditions.
Specification of a data container limits the scope of table partition versions to the listed data container.
PSQL Packages
Invantive PSQL comes with a number of pre-defined packages. These can be queried using SystemPackages@DataDictionary and SystemPackageFunctions@DataDictionary.
::= ALTER PERSISTENT CACHE PURGE ( UNKNOWN | DROPPABLE | OBSOLETE | READY | ALL ) TABLE PARTITION VERSIONS ( DATACONTAINER stringConstant )?
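For instance, instantiating the production above to delete the facts of obsolete table partition versions:

```sql
alter persistent cache purge obsolete table partition versions
```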
referenced by:
alterPersistentCacheDownloadStatement:
::= ALTER PERSISTENT CACHE DOWNLOAD FEED ( LICENSE CONTRACT CODE stringConstant )? ( TOKEN stringConstant )? ( DATACONTAINER stringConstant )? ( PARTITION partitionSimpleIdentifier )? ( LIMIT numericConstant )? ( NO DELETE )?
referenced by:
alterPersistentCacheConfigureWebhooksStatement:
Trickle loading is an advanced strategy to efficiently and fast update the replicate data. It requires all changes (insert, update and delete) on the replicated data to be registered as event messages, possibly with a delay.
The registration of changes can be activated on selected platforms using webhooks. Configuration of the webhooks can be done using this statement. When not specified, the license contract code of the current subscription will be used.
::= ALTER PERSISTENT CACHE ( ENABLE | DISABLE ) WEBHOOKS ( LICENSE CONTRACT CODE stringConstant )? ( TABLE tableSpec )? ( PARTITION partitionSimpleIdentifier )? ( DATACONTAINER stringConstant )?
referenced by:
alterPersistentCacheRefreshStatement:
This statement triggers the refresh of replicated data for all connected data containers or for one specific data container alias. A refresh of the replicated data can also be triggered by executing a SQL statement specifying the use of replicated data of a specific age.
In default mode (without 'force'), a refresh is only executed on the specified data when the age of the replicated data exceeds the configured maximum age. An offset to the configured maximum age can be specified using the 'when obsolete within' clause. With a forced refresh, the replicated data is always replaced by a recent version of the source data.
The maximum number of parallel processes to replicate the data can be configured using the 'parallel' clause to increase throughput and decrease runtime of the replication process.
By default the approach as configured on each table partition is used to update the replica. However, a deviating approach can be specified. This is typically done after switching a table partition to trickle loading to run a one-time replication process with the copy approach, effectively ensuring that no changes have gone unnoticed.
::= ALTER PERSISTENT CACHE FORCE? REFRESH ( DATACONTAINER dataContainerAlias? )? ( PARALLEL numericConstant )? ( APPROACH ( COPY | TRICKLE | SAMPLE | DEFAULT ) )? ( WHEN OBSOLETE WITHIN intervalConstant )?
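For instance, instantiating the production above:

```sql
-- Force a refresh across all data containers, replicating with at most
-- four parallel processes using the copy approach.
alter persistent cache force refresh parallel 4 approach copy
```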
referenced by:
alterPersistentCacheLoadStatement:
Loads all available tables across all connected data containers in the cache. Typically used for demonstrations.
referenced by:
alterPersistentCacheTableRefreshStatement:
Refreshes all data of a specified table. The options are explained at alterPersistentCacheRefreshStatement.
::= ALTER PERSISTENT CACHE TABLE tableSpec FORCE? REFRESH ( PARTITION partitionIdentifierWithAlias )? ( PARALLEL numericConstant )? ( APPROACH ( COPY | TRICKLE | SAMPLE | DEFAULT ) )? ( WHEN OBSOLETE WITHIN intervalConstant )?
referenced by:
alterPersistentCachePartitionRefreshStatement:
Refreshes all data of a specified partition. The options are explained at alterPersistentCacheRefreshStatement.
::= ALTER PERSISTENT CACHE PARTITION partitionIdentifierWithAlias FORCE? REFRESH ( PARALLEL numericConstant )? ( APPROACH ( COPY | TRICKLE | SAMPLE | DEFAULT ) )? ( WHEN OBSOLETE WITHIN intervalConstant )?
referenced by:
alterPersistentCacheDropStatement:
Drops all facts stored in table partition versions from the database and the associated metadata from the cache repository for a range of table partitions. The tables and partitions can be specified using the table, partition and/or data container clause.
referenced by:
alterPersistentCacheSetStatement:
Changes centrally maintained settings for the cache, including defaults for new tables or data containers.
- Fresh: number of seconds during which facts are considered fresh after completion of loading.
- Retention: number of seconds trickle loading messages are kept available after processing.
- Metadata: number of seconds metadata are kept available after dropping the associated facts in the version.
- Metadata recyclebin: number of seconds dropped metadata are kept available.
- Version: reset data model version to another version to re-run upgrade scripts.
- Token: secret text used as token to retrieve trickle loading messages.
- Prefix/postfix: prefix and/or postfix for Invantive Data Replicator maintained database views.
- Maintain: indicator whether to maintain database views per table partition and/or overall table.
- My Messages: indicator whether to place a copy of incoming trickle loading messages in dc_my_incoming_messages for custom code purposes.
- Index Tables: whether to create non-unique indexes on the facts tables for speedier retrieval on database views.
- Auto Upgrade: whether to run a data model upgrade once.
- Data Container Prefix/postfix: prefix and/or postfix for Invantive Data Replicator maintained overall views.
::= ALTER PERSISTENT CACHE SET ( ( FRESH | RETENTION FORWARDED INCOMING MESSAGES | METADATA? RECYCLEBIN | DATA MODEL VERSION ) numericConstant | TOKEN stringConstant | LOGICAL ( OVERALL | PARTITION ) VIEW ( NAME ( PREFIX | POSTFIX ) stringConstant | MAINTAIN booleanConstant ) | ( LOAD MY MESSAGES | INDEX ( FACTS | HISTORY ) TABLES ) booleanConstant | AUTO UPGRADE ONCE | alterPersistentCacheSetBackingConnectionString | alterPersistentCacheSetTableOptions | alterPersistentCacheSetDataContainerOptions )
referenced by:
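Two hedged sketches following the production above; the numeric value is an assumption:

```sql
-- Sketch: consider facts fresh for one hour (3600 seconds) after load completion.
alter persistent cache set fresh 3600

-- Sketch: maintain an overall database view per replicated table.
alter persistent cache set logical overall view maintain true
```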
alterPersistentCacheSetBackingConnectionString:
referenced by:
alterPersistentCacheSetDataContainerOptions:
Changes settings of a data container.
The prefix and postfix for the logical overall view of every new replicated table registered in the data container can be specified. This is typically done when there are multiple data containers pointing to the same platform, such as multiple subscriptions on Teamleader or multiple Exact Online countries. Specifying a prefix ensures that the automatically generated logical overall views have unique and recognizable names.
referenced by:
alterPersistentCacheSetTableOptions:
Change table and table partition-specific settings for the cache. Most options are identical to those on alterPersistentCacheSetStatement, but intended for table/table partition-specific deviations. Additional options are:
- State: move all table partition versions with an earlier state to a specific end state.
- Approach: replication strategy.
::= TABLE tableSpec ( LOGICAL ( OVERALL VIEW ( MAINTAIN booleanConstant | NAME stringConstant ) | PARTITION VIEW ( MAINTAIN booleanConstant | NAME ( PREFIX | POSTFIX ) stringConstant ) ) | INDEX ( FACTS | HISTORY ) TABLE booleanConstant | STATE ( OBSOLETE | DROPPED ) | ( PARTITION partitionIdentifier )? APPROACH ( COPY | TRICKLE | SAMPLE ) ( LICENSE CONTRACT CODE stringConstant )? )
referenced by:
referenced by:
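A sketch of the table-specific options per the production above; the table name and alias are assumptions:

```sql
-- Sketch: switch one table to trickle-loading replication.
alter persistent cache set table SalesInvoices@eolnl approach trickle

-- Sketch: move earlier table partition versions to the obsolete end state.
alter persistent cache set table SalesInvoices@eolnl state obsolete
```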
Various statements to alter properties of the current Invantive UniversalSQL session.
referenced by:
Change properties of the current Invantive UniversalSQL session.
referenced by:
alterDataDictionarySetStatement:
Associate a persistent store to back the data dictionary associated with the current Invantive UniversalSQL session. The persistent store will be used to store and retrieve object definitions such as PSQL procedures, SQL views and PSQL packages.
referenced by:
alterDataDictionarySetBackingConnectionString:
referenced by:
Change the ID used to charge usage to another agreement as specified in the license. Used solely for hosted use of Invantive products to associate usage statistics on data downloaded, data uploaded, users, devices and partitions with individual billing IDs. The original agreement code is also registered.
referenced by:
alterSessionSetPoolIdentityId:
referenced by:
alterSessionSetBillingReference:
Add a billing reference to differentiate use within one agreement depending on application, department or operating company. Typically used by larger organizations to associate usage statistics on data downloaded, data uploaded, users, devices and partitions with individual billing references.
referenced by:
alterSessionSetIuidSourceAlias:
Use a non-default data container to determine the IUID (Invantive User ID) value used for identifying the user. Typically used when the actual user is available in a user directory of a higher-numbered data container.
referenced by:
Invantive UniversalSQL offers advanced security features including row-level security with custom connectors. Sets the active roles of a session.
referenced by:
Create a table on the specified platform, filled with data from the select statement. An error is raised when a table with the same name already exists. When 'or replace' is specified, a previously existing table with the same name is dropped before creating the new table.
A new table is assigned a technical primary key column. Indexes are also created by default where deemed useful.
referenced by:
referenced by:
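A hedged sketch of create table as select; the table names and data container aliases are assumptions:

```sql
-- Sketch: materialize a query result as a table on another platform,
-- dropping a previously existing table with the same name first.
create or replace table CustomersCopy@sqlserver
as
select *
from   Customers@eolnl
```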
Drop the specified table on the specified platform. An error is raised when no table exists by that name.
referenced by:
referenced by:
Replaces the value of a provider attribute by a new value.
referenced by:
referenced by:
referenced by:
Execute a file loaded from the operating system with Invantive UniversalSQL statements. The file may contain Invantive UniversalSQL and Procedural SQL. The use of Invantive Script is not possible; Invantive Script is a client-side solution available in products such as Invantive Data Hub and Invantive Query Tool.
referenced by:
A begin transaction statement initiates a transaction. Invantive UniversalSQL typically provides no transaction logic, given the distributed nature and the limitations of the possible platforms. Some platforms enable collection of transaction data, which is handed over to the backing platform as a whole.
referenced by:
Forgets all collected transaction data not yet handed over to the backing platform.
referenced by:
Hand over all collected transaction data to the backing platform for registration.
referenced by:
The use statement enables you to specify which partitions should be accessed by subsequent select, insert, update and delete statements. You can specify one or multiple partitions as a comma-separated list, possibly for a specific data container by appending an at-sign plus data container alias to the partition code. The value 'default' has a special meaning; it specifies to use the partition(s) originally selected when you logged on. The value 'all' also has a special meaning: it selects all partitions available.
For instance, to select partition '35' in the data container with alias 'eolnl' and partition '57345' in the data container with alias 'nmbrsnl', you can execute: 'use 35@eolnl, 57345@nmbrsnl'.
For complex scenarios, you can specify any valid Invantive UniversalSQL select statement which returns one or two columns. Each row from the query specifies one partition to select. The first column specifies the partition code, whereas the optional second column specifies a specific data container alias.
For instance, to select partition '35' in the data container with alias 'eolnl' and partition '57345' in the data container with alias 'nmbrsnl', you can execute: 'use select '35', 'eolnl' from dual@datadictionary union all select '57345', 'nmbrsnl' from dual@datadictionary'.
::= USE ( partitionIdentifiersList ( ( EXCLUDE | EXCLUDING ) partitionIdentifiersList )? | selectStatement )
referenced by:
referenced by:
A partition identifier uniquely identifies one or more partitions from the currently available data containers.
The special partition identifier 'default' stands for all partitions that were chosen as default partitions during setup of the database connection. The special partition identifier 'all' stands for all available partitions across all available database connections.
referenced by:
referenced by:
- alterPersistentCachePartitionRefreshStatement
- alterPersistentCacheTableRefreshStatement
- partitionIdentifiersList
referenced by:
Inserts data into a table of a data container. Low-volume inserts are typically done one-by-one, but on some connectors a bulk insert option is available which inserts data with multiple rows each time for higher throughput. The data can be sourced from a select statement or values. Values can be single row values or multiple row values.
The identified by and attach to clauses can be used in combination with transactions. The identified by clause registers the master rows, whereas the attach to clause registers to which master row detail rows belong. Upon commit, a master row (such as a sales order) plus its details (such as multiple sales order lines) are sent to the target data container.
referenced by:
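A hedged sketch of a transactional master-detail insert; the table names, columns, aliases and the literal transaction keys are all assumptions, and the exact clause spellings follow the prose above:

```sql
-- Sketch: register a sales order as master row and attach its order lines,
-- so both are handed over together to the target data container upon commit.
begin transaction

insert into SalesOrders@eolnl (OrderNumber, Customer)
values (1001, 'C042')
identified by '1001'

insert into SalesOrderLines@eolnl (OrderNumber, Item, Quantity)
values (1001, 'ITEM-A', 2)
attach to '1001'

commit
```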
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
Updates data in a table of a data container.
referenced by:
referenced by:
referenced by:
Deletes data from a table of a data container.
referenced by:
referenced by:
The compare and synchronize statement provides one-directional and bi-directional analysis and alignment of data in two tables. It is an extended version of SQL statements like 'upsert' and 'merge' found in other SQL implementations. The two tables can be located in any data container, enabling cross-data container synchronization of data. In most scenarios, data from both tables is downloaded and compared by Invantive UniversalSQL to establish the necessary actions. Some specific edge cases may have an optimized algorithm to reduce data downloads.
In a one-directional approach as specified by 'FROM' or 'TO', the compare and synchronize statement typically analyzes the differences between two tables with identically named columns, separating all differences into one of four groups:
- insert: a row exists in the source table and not in the target table;
- delete: a row does not exist in the source table, but exists in the target table;
- update: the row exists in both tables, but the column contents are different;
- none: the row exists in both tables and the column contents are equal.
Column values are matched on name. By default, all columns having identical names are included in the matching process. The IGNORE CHANGES TO clause can be used to exclude columns by name from matching. It is typically used to exclude changes in column values that should not trigger a DML-operation, such as columns whose value is managed by the application, including technical keys.
Unique rows in both tables are matched using the list of columns specified by 'IDENTIFIED BY'. A NULL in an identifying column is considered a specific value for matching; when both the source and target table contain a null value, the rows are compared. Rows are ignored entirely for synchronization when all identifying columns have a NULL value and the clause 'IGNORE NULLS' is present.
After analysis, the differences are filtered to only contain the combination of DML-operations specified using 'WITH'. Applying all DML-operation types INSERT, UPDATE and DELETE would result in the two tables having identical contents. The INSERT and UPDATE operations by default apply to all columns, but columns can be excluded using 'ALL EXCEPT'. Excluding columns is typically used to leave out audit columns, such as a generated creation date, upon update while still including them upon insert.
For better performance, SYNCHRONIZE uses bulk operations where supported, similar to CREATE TABLE and BULK INSERT. Bulk operations typically offer an order of magnitude better performance than solitary operations. However, error handling can be inconvenient, since it is often unclear which specific row triggered an error. Nor is it easily established upon an error whether other rows in the same bulk operation have been processed. The BATCHSIZE clause allows specification of the number of rows per bulk operation, with a minimum of 1. When BATCHSIZE is not specified, a platform-specific default value is used, which can fluctuate due to dynamic management by Invantive UniversalSQL.
In case of bi-directional synchronization, the RESOLVE BY-clause enables specification of the column names whose values determine which table contains the preferred row with the most current values. Using left-to-right column preference, the values of all RESOLVE BY columns are inspected in both tables. The first column value that ranks higher than the value in the other table selects its table as the source. The ranking is reversed from higher to lower when DESC is specified. The RESOLVE BY-clause is typically used with columns containing a (synchronized) timestamp value such as UTC time.
The 'APPLY TO' syntax is reserved for future use to allow routing DML-operations to other tables.
By default, the SYNCHRONIZE statement fails upon the first error returned. Using 'CONTINUE ON FIRST ... ERRORS' this threshold can be raised; execution then continues past errors, but the statement still fails upon completion when any error occurred during execution.
The differences can then be:
- applied on the target table to make the tables equal, using bulk operations where supported;
- returned as a list of content differences (not yet available);
- returned as Invantive UniversalSQL DML-statements to achieve synchronization (not yet available).
After checking the output of the last two categories of analysis, the differences can be applied on the target table.
In a bi-directional approach, as specified by the keyword 'AND' between the two table identifiers, deletes are not returned from the analysis phase; they are replaced by inserts on the table that lacks the other table's row. The target of an update is determined by the resolution preference defined in the 'RESOLVE' clause. The 'RESOLVE' clause either always applies updates to the left or right table as listed, using respectively 'PREFER LEFT' and 'PREFER RIGHT', or the combined ordinal value of the columns listed in the 'RESOLVE BY' clause is evaluated and the highest-ranking value is considered leading.
::= ( synchronize | COMPARE ) tableSpec ( TO | AND ) tableSpec synchronizeIudStatement? identifiedByMultipleClause synchronizeIgnoreClause? resolveByClause? applyToClause? synchronizeMaxRowsStatement? ( RETURNING ( ROWS AFFECTED | OVERVIEW | SQL | DIFFERENCES ) )? ( BATCHSIZE numericConstant )? ( CONTINUE ON FIRST numericConstant ERRORS )? synchronizeTriggers?
referenced by:
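A hedged sketch of a one-directional synchronization following the production above; the table names, aliases and column name are assumptions:

```sql
-- Sketch: align a target table with a staging table, matching rows on
-- CustomerCode, in batches of 100 rows, tolerating up to 10 errors.
synchronize CustomersStaging@stg to Customers@tgt
identified by CustomerCode
batchsize 100
continue on first 10 errors
```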
referenced by:
referenced by:
::= UPDATE ( ALL ( EXCEPT columnList )? )? ( IGNORE CHANGES TO columnList )? synchronizeMaxRowsStatement?
referenced by:
referenced by:
referenced by:
- synchronizeDeleteStatement
- synchronizeInsertStatement
- synchronizeStatement
- synchronizeUpdateStatement
no references
referenced by:
referenced by:
Creates a database view with an associated user-definable SQL select statement. The SQL select statement can access any object available across all data containers of the database. A view is stored for the duration of the session in the data dictionary under the catalog DataDictionary and schema Custom. When creating, modifying and deleting views, it is not necessary to specify that the view is in the data dictionary; after all, there is only one place it can be. All defined views can be queried using the data dictionary view SystemViews. Upon creation, the view is checked for syntactic correctness. Syntactic errors lead to an error, unless the force option is also specified in the statement. Upon execution, both syntax and validity are checked against the then active database.
To run the query associated with the view, it is necessary to specify the origin via the DataDictionary alias. A view can be embedded through its name in another view or query as if it were a table.
Database views complement the unalterable Invantive-provided views, which are always driver-specific. Invantive's driver views only combine data within the driver that contains them. Driver views can always be used independent of the database they are contained in, as long as the driver is present.
referenced by:
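A hedged sketch: define a session view and query it via the DataDictionary alias as described above; the source table, alias and column are assumptions:

```sql
-- Sketch: a session view over a hypothetical Customers table.
create or replace view DutchCustomers
as
select *
from   Customers@eolnl
where  CountryCode = 'NL'

-- Querying the view requires the DataDictionary alias as origin.
select *
from   DutchCustomers@DataDictionary
```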
The drop view statement removes a view from the data dictionary. After completion, the view can no longer be used by queries being started.
referenced by:
::= SYNCHRONIZE
| SYNC
referenced by:
referenced by:
referenced by:
referenced by:
Tests a number of boolean expressions in sequence and evaluates to the expression associated with the first one that is true. When no boolean expression evaluates to true, the else expression is returned.
referenced by:
referenced by:
referenced by:
Changes the data type of an expression into the indicated data type.
referenced by:
referenced by:
- avgAggregateFunction
- castExpression
- countAggregateFunction
- createTableStatement
- csvTableSpec
- embeddedSelect
- excelTableSpec
- expression
- firstAggregateFunction
- forJsonClause
- forXmlClause
- forXmlClauseCommonDirectives
- functionExpression
- htmlTableSpec
- insertFieldList
- insertValues
- insertValuesBulk
- internetTableSpec
- jsonTableSpec
- lastAggregateFunction
- listAggAggregateFunction
- maxAggregateFunction
- minAggregateFunction
- ndjsonTableSpec
- now
- pSqlFunctionOrProcedureStatementNS
- pSqlPackageProcedureStatementNS
- pipelinedTableFunctionSpec
- pivotClause
- productAggregateFunction
- sqlDataTypeExtended
- stdDevAggregateFunction
- stringOrChar
- stringSplitSpec
- sumAggregateFunction
- tableFunctionSpec
- utc
- xmlTableSpec
- zipAggregateFunction
referenced by:
- avgAggregateFunction
- castExpression
- countAggregateFunction
- createTableStatement
- csvTableSpec
- embeddedSelect
- excelTableSpec
- expression
- firstAggregateFunction
- forJsonClause
- forXmlClause
- forXmlClauseCommonDirectives
- functionExpression
- htmlTableSpec
- insertFieldList
- insertValues
- insertValuesBulk
- internetTableSpec
- jsonTableSpec
- lastAggregateFunction
- listAggAggregateFunction
- maxAggregateFunction
- minAggregateFunction
- ndjsonTableSpec
- now
- pSqlFunctionOrProcedureStatementNS
- pSqlPackageProcedureStatementNS
- pipelinedTableFunctionSpec
- pivotClause
- productAggregateFunction
- sqlDataTypeExtended
- stdDevAggregateFunction
- stringOrChar
- stringSplitSpec
- sumAggregateFunction
- tableFunctionSpec
- utc
- xmlTableSpec
- zipAggregateFunction
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
no references
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
triggerRecordVariableExpression:
referenced by:
Greater than is a binary operator which returns true when the left value is greater than the right value. When either value is null, the outcome is null. Otherwise the outcome is false.
referenced by:
Greater than or equal is a binary operator which returns true when the left value is greater than or equal to the right value. When either value is null, the outcome is null. Otherwise the outcome is false.
referenced by:
Less than is a binary operator which returns true when the left value is less than the right value. When either value is null, the outcome is null. Otherwise the outcome is false.
referenced by:
Less than or equal is a binary operator which returns true when the left value is less than or equal to the right value. When either value is null, the outcome is null. Otherwise the outcome is false.
referenced by:
Equals is a binary operator which returns true when the left value and right value are identical. When either value is null, the outcome is null.
referenced by:
Not equals is a binary operator which returns true when the left value and right value are not identical. When either value is null, the outcome is null.
referenced by:
Like is an operator which returns true when the left value matches the pattern in the right value. An occurrence of '%' in the right value matches a sequence of zero or more characters in the left value. An occurrence of '_' in the right value matches any single character in the left value.
referenced by:
Between is a ternary operator which returns true when the first value lies between the second and third values, bounds included. When one of the values is null, the outcome is null.
referenced by:
In is an n-ary operator which returns true when the left value equals one of the right-hand side values after the 'in'.
referenced by:
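The three-valued null semantics described above can be sketched in a query; using comparison operators in the select list is an assumption based on the expression grammar:

```sql
-- Sketch: a comparison with null yields null, not false.
select 1 > null          -- null
,      2 between 1 and 3 -- true
,      null = null       -- null
from   dual@datadictionary
```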
| expression ( ( concat | times | divide | plus | minus | gt | ge | lt | le | eq | neq | and | or ) expression | not? ( ( like | between expression and ) expression | in_ parenthesisOpen ( expression ( COMMA expression )* | selectStatement ) parenthesisClose ) | is ( not? NULL | expression ) )
referenced by:
- alterSessionSetBillingId
- alterSessionSetBillingReference
- alterSessionSetIuidSourceAlias
- alterSessionSetPoolIdentityId
- arrayExpression
- attachToClause
- avgAggregateFunction
- caseElseExpression
- caseWhenThenExpression
- castExpression
- csvTableLiteral
- csvTableOptions
- excelDataRectangle
- excludeExpression
- expression
- expressionList
- firstAggregateFunction
- htmlTableLiteral
- identifiedByClause
- insertValuesList
- joinConditions
- jsonTableLiteral
- jsonTableSpec
- lastAggregateFunction
- maxAggregateFunction
- minAggregateFunction
- namedExpression
- ndjsonTableSpec
- numberedOrNamedExpressionList
- pSqlAssignmentStatement
- pSqlContinueStatement
- pSqlElsIfExpression
- pSqlExecuteImmediateStatementNS
- pSqlExecuteNativeStatementNS
- pSqlExitStatement
- pSqlForNumberLoopStatement
- pSqlIfStatement
- pSqlItemDeclaration
- pSqlReturnStatement
- pSqlWhileLoopStatement
- part
- passingSourceOrPathExpression
- productAggregateFunction
- setStatement
- sitemapExpression
- startAtExpression
- stdDevAggregateFunction
- stringSplitSpec
- sumAggregateFunction
- updateValue
- whereClause
- xmlTableLiteral
- xmlTableSpec
referenced by:
referenced by:
::= ( abs | acos | anonymize | ascii | ascii_to_blob | asin | atan | atan2 | base64_decode | base64_encode | basename | bit_length | octet_length | camel | ceil | chr | charindex | coalesce | concat_func | cos | covfefify | compress | convertnumerictotime | csvdecode | csvencode | uncompress | dateadd | datepart | date_ceil | date_floor | date_round | date_trunc | day | dayofweek | dayofyear | dense_rank | decode | double_metaphone | double_metaphone_alt | excel_day | exp_func | floor | from_unixtime | gzip | hex_to_blob | hour | httpget | httpget_text | httppost | initcap | instr | is_vies_eu_vat | is_boolean | is_date | is_email | is_eu_vat | is_eu_vat_reason | is_guid | is_iban | is_number | is_phone_number | is_uri | jsondecode | jsonelement | jsonencode | left | length | levenshtein | ln | log | lower | lpad | ltrim | ltrimtext | md5 | metaphone | metaphone3 | metaphone3_alt | microsecond | millisecond | minute | mod | month | new_time | newid | number_to_speech | normalize | nvl | object_exists | phone_number_to_e164 | phone_number_to_international | phone_number_to_national | phone_number_type | power | quarter | barcode_qr | barcode_qr_epc | quote_ident | quote_literal | quote_nullable | raise_error | random | random_blob | rand | rank | regexp_instr | regexp_replace | regexp_substr | remainder | replace | repeat | reverse | right | round | row_number | rpad | rtrim | rtrimtext | second | sha1 | sha256 | sha384 | sha512 | shorten | sign | sin | soundex | split_part | sql_variant | sqrt | substr | sys_context | tan | to_array | to_binary | to_boolean | to_char | to_date | to_number | to_guid | to_hex | translate | translate_resources | trim | trunc | unicode_to_blob | unistr | unix_timestamp | upper | urldecode | urlencode | htmldecode | htmlencode | user | ungzip | vies_eu_vat_address | vies_eu_vat_country | vies_eu_vat_name | xmlcomment | xmldecode | xmlencode | xmlelement | xmlformat | xmltransform | year | add_months | months_between | zero_blob | 
zlib_compress | zlib_decompress | IDENTIFIER ) parenthesisOpen expressionList? parenthesisClose
| random
| rand
| now
| utc
| user
| sqlerrm
| sqlcode
referenced by:
Returns the absolute value of a number.
Parameters:
- Input: A number that is greater than or equal to System.Double.MinValue, but less than or equal to System.Double.MaxValue.
referenced by:
Returns the angle of the provided cosine.
Parameters:
- Input: the cosine to get the angle of.
referenced by:
Anonymize a text or number. Anonymization is executed such that when the same original value is anonymized within the same session, the anonymized value is identical. The anonymized value also uniquely matches the original value. Without access to the anonymization map, however, the original value cannot be calculated from the anonymized value.
In mathematics, the anonymization function is a bijection: each element of the original set is paired with exactly one element of the anonymized set, and each element of the anonymized set is paired with exactly one element of the original set.
Parameters:
- Value: A text or number to be obfuscated.
- Maximum length (optional): Maximum length of the anonymized value, in digits for numbers or characters for text. Null means no restriction on maximum length.
- Mapping (optional): algorithm to use. The default algorithm is 'DEFAULT' which maps text values to a range of hexadecimal characters and numbers to a range of numbers. Alternative mappings are described below.
- DEFAULT: the default algorithm.
- IVE-GL-JOURNAL-DESCRIPTION: general ledger journal descriptions: no preferred anonymizations, leave familiar and non-confidential descriptions in original state.
- IVE-GL-ACCOUNT-DESCRIPTION: general ledger account descriptions: no preferred anonymizations, leave familiar and non-confidential descriptions in original state.
- IVE-PSN-FIRST-NAME: person first names: prefer readable alternative first names, anonymize all.
- IVE-PSN-LAST-NAME: person last names: prefer readable alternative last names, anonymize all.
- IVE-ADS-CITY-NAME: address city names: prefer readable alternative city names, anonymize all.
- IVE-ADS-STREET-NAME: address street names: prefer readable alternative street names, anonymize all.
referenced by:
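The session-stable behavior described above can be sketched as follows; the input value and the alias-without-AS column naming are assumptions:

```sql
-- Sketch: within one session, the same input anonymizes to the same value.
select anonymize('Jane')    anon_1
,      anonymize('Jane')    anon_2     -- identical to anon_1
,      anonymize('Jane', 8) anon_short -- restricted to at most 8 characters
from   dual@datadictionary
```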
Get the position of a character in the database character set.
Parameters:
- Input: character to get position from.
referenced by:
referenced by:
Returns the angle of the provided sine.
Parameters:
- Input: the sine to get the angle of.
referenced by:
Returns the angle of the provided tangent.
Parameters:
- Input: the tangent to get the angle of.
referenced by:
Returns the angle whose tangent is the quotient of the two provided numbers (the two-argument arctangent).
Parameters:
- First number: the first number (the numerator).
- Second number: the second number (the denominator).
referenced by:
Add a number of months to a datetime.
Parameters:
- Date: datetime to add the months to.
- Months: the number of months to add.
::= ADD_MONTHS
referenced by:
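A short sketch of add_months; the format mask for to_date is an assumption:

```sql
-- Sketch: add two months to January 15, 2024, yielding March 15, 2024.
select add_months(to_date('2024-01-15', 'YYYY-MM-DD'), 2)
from   dual@datadictionary
```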
Get the number of months between two dates. The fractional part is determined using a 30-day month length.
Parameters:
- Date 1: first date.
- Date 2: second date.
Introduced in 17.32.
referenced by:
Converts a base64-encoded value back to the binary value, as defined on Wikipedia.
Parameters:
- Input: value to convert back to the original.
referenced by:
Converts a binary value to base64-encoded characters, as defined on Wikipedia.
Parameters:
- Input: value to convert to base64 characters.
referenced by:
Extract file name from a file path.
Parameters:
- File path: Full path of a file, consisting of drive, folders, file name and possibly an extension.
- Extension: Optional extension to remove, including leading '.' when necessary.
referenced by:
Converts the provided string to camel case.
Parameters:
- Input: the string to convert to camel case.
referenced by:
Rounds the input up to the nearest integer, unless a number of decimals is defined, in which case it rounds up to the nearest number with that many decimals, or, for a datetime, to the given number of time positions.
Parameters:
- Input: A number or datetime to ceil.
- Decimals [optional]: A number to specify how many decimals it may ceil to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
referenced by:
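A sketch of ceil with and without a decimals argument, following the description above:

```sql
-- Sketch: ceil rounds up; with a decimals argument it rounds up at that precision.
select ceil(1.234)    -- 2
,      ceil(1.234, 2) -- 1.24: smallest value with 2 decimals at or above the input
,      ceil(-1.5)     -- -1
from   dual@datadictionary
```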
Get a character from the database character set.
Parameters:
- Input: a numeric value of a character.
referenced by:
Get the position of the first occurrence of a search string in another string, optionally starting from a given position.
Parameters:
- Search string: Text to search for.
- Full string: String to be searched in.
- Start Position [optional]: Position of string to start searching.
referenced by:
Get the number of bits needed to represent a value. For a blob, this is the number of bits for the bytes of the blob. For all other data types, the value is first converted to a string and then the number of bits of the UTF8 representation is determined.
Parameters:
- Value: value to determine length in bits for.
::= BIT_LENGTH
referenced by:
Get the number of bytes needed to represent a value. For a blob, this is the number of bytes of the blob. For all other data types, the value is first converted to a string and then the number of bytes of the UTF8 representation is determined.
Parameters:
- Value: value to determine length in bytes for.
referenced by:
Get a concatenation of a text repeated a number of times.
Parameters:
- Text: text to repeat.
- Times: number of times to repeat the text.
referenced by:
referenced by:
referenced by:
referenced by:
Performs a coalescing operation: returns the left value when it is not null, otherwise the right value.
Parameters:
- Left: an object.
- Right: an object.
referenced by:
Concatenate the left and right values together as a text. The values must all be either a BLOB or a type that can be cast to a string.
referenced by:
Concatenate a list of values together as a text or BLOB. The values must all be either a BLOB or a type that can be cast to a string.
::= CONCAT
referenced by:
Returns the cosine of the provided angle.
Parameters:
- Input: the angle to get the cosine of.
referenced by:
referenced by:
referenced by:
ConvertNumericToTime converts hours expressed as a decimal number to a time representation.
Parameters:
- Input: number to be converted to time.
- Show Seconds [optional]: accepts boolean input.
- Zero Change [optional]: when enabled and input is 0, shows Zero Text.
- Zero Text [optional]: text to display when input is 0 and Zero Change is true.
- Show Days [optional]: accepts boolean input.
referenced by:
Returns the CSV decoded input.
Parameters:
- Input: the CSV-encoded input to decode.
referenced by:
Returns the CSV encoded input.
Parameters:
- Input: the input to encode as CSV.
referenced by:
::= UNCOMPRESS
referenced by:
Adds an amount of time to a date.
Parameters:
- Interval: the date interval to be added.
- Number: the number of intervals to add.
- Date: the date to which the interval should be added.
referenced by:
Get the specified datepart from a datetime.
Parameters:
- datepart: a part of a date.
- date: a datetime to get the datepart from.
referenced by:
referenced by:
::= DATE_FLOOR
referenced by:
::= DATE_ROUND
referenced by:
::= DATE_TRUNC
referenced by:
Collect the day from a date.
Parameters:
- Input: A dateTime.
referenced by:
Collect the day of a week from a date.
Parameters:
- Input: A dateTime.
referenced by:
Collect the day of a year from a date.
Parameters:
- Input: A dateTime.
referenced by:
The decode function is an expression-level if...then...else. The expression is compared with each search value in turn. Upon the first match, the function evaluates to the result following that search value. If no match is found, the function evaluates to the optional default. When no default is present, the function evaluates to NULL.
Parameters:
- Expression: the value to compare with. Is automatically converted to the datatype of the first search value.
- Search[n]: if Expression is equal to Search, the function evaluates to the result.
- Result[n]: the value returned when the expression equals the corresponding search value.
- Default: if no matches are found, default will be returned.
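A sketch of decode use, with hypothetical table and column names:

```sql
-- Map status codes to descriptions; 'Unknown' is the default
-- returned when no search value matches.
select decode(status_code, 'O', 'Open', 'C', 'Closed', 'Unknown')
from   orders
```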
referenced by:
::= DENSE_RANK
referenced by:
referenced by:
referenced by:
Divides one number by another.
Parameters:
- First: the number to divide.
- Second: the number to divide by.
referenced by:
referenced by:
Returns Euler's number e raised to the specified power.
Parameters:
- Input: the exponent to raise e to.
no references
referenced by:
Get the number which in Excel represents the date. The leap year bug of Excel is included, so the day number increments by 2 between February 28, 1900 and March 1, 1900.
Parameters:
- Date: datetime to get number for.
Introduced in 17.32.
referenced by:
Rounds the input down to the nearest integer. When an amount of decimals is specified, it rounds down to that amount of decimals for a number, or to that amount of time positions for a date.
Parameters:
- Input: A number or datetime to floor.
- Decimals [optional]: A number to specify how many decimals it may floor to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
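A sketch of floor with and without the optional decimals argument; the literal results assume standard floor semantics (rounding toward negative infinity):

```sql
select floor(12.789)     -- 12
,      floor(12.789, 1)  -- 12.7
,      floor(-2.5)       -- -3
```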
referenced by:
Get the date/time from an integer representing a UNIX epoch time.
Parameters:
- Input: An integer.
referenced by:
Compress a BLOB using the GZIP-algorithm.
Parameters:
- input: The BLOB to compress.
referenced by:
Cast a text in hexadecimal notation to the corresponding BLOB. Reverse of to_hex.
Parameters:
- Text: text in hexadecimal notation with two characters per byte.
referenced by:
Collect the hour from a date.
Parameters:
- Input: A dateTime.
referenced by:
::= HTMLDECODE
referenced by:
::= HTMLENCODE
referenced by:
Changes the first letter of each word in uppercase, all other letters in lowercase.
Parameters:
- Input: Text to convert.
referenced by:
Gets the position of the first occurrence of a substring in a string.
Parameters:
- String: string to be searched.
- Substring: text to search for.
- StartPosition [optional]: position in the string to start searching from.
- Occurrence [optional]: which occurrence to return the position of.
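A sketch assuming Oracle-style, 1-based position semantics:

```sql
select instr('hello world', 'o')        -- 5: first occurrence
,      instr('hello world', 'o', 1, 2)  -- 8: second occurrence
```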
referenced by:
Evaluates to boolean true when the value provided is a valid EU VAT account that is active and approved for cross-country use in the EU. Each validation takes approximately 250 ms and uses the VIES service offered by the European Union.
Parameters:
- Value: value to test.
referenced by:
Evaluates to text of the address linked to the EU VAT account provided (when the EU VAT account is valid and active).
Each validation takes approximately 250 ms and uses the VIES service offered by the European Union.
Parameters:
- Value: value to test.
referenced by:
Evaluates to text of the country linked to the EU VAT account provided (when the EU VAT account is valid and active).
Each validation takes approximately 250 ms and uses the VIES service offered by the European Union.
Parameters:
- Value: value to test.
referenced by:
Evaluates to text of the name linked to the EU VAT account provided (when the EU VAT account is valid and active).
Each validation takes approximately 250 ms and uses the VIES service offered by the European Union.
Parameters:
- Value: value to test.
referenced by:
Determines whether a value is a valid boolean. When true, the value can be converted by to_boolean.
Parameters:
- Input: value to convert.
::= IS_BOOLEAN
referenced by:
Determines whether a value is a valid date according to the specified format. When true, the value can be converted by to_date.
Parameters:
- Input: value to convert.
- Format: format mask. Optional; defaults to local style. List of format masks is available in a separate section.
referenced by:
Evaluates to boolean true when the value provided is a valid email address according to RFC 822.
Parameters:
- Email: email address.
- Allow plus sign: indicator whether the use of a plus sign ('+') to introduce an alias is allowed (defaults to false).
- Check TLD: indicator whether the TLD of the domain name must be validated against the current list (defaults to true).
referenced by:
Evaluates to boolean true when the value provided can be a valid EU VAT account using a fixed algorithm. The fixed algorithm accepts VAT numbers that are valid according
to the country's rules for assigning tax numbers. It does NOT check whether the VAT account is in use. Use `is_vies_eu_vat` to check whether it is actually active and approved for cross-country EU use.
Parameters:
- Value: value to test.
referenced by:
Evaluates to text expressing the reason why the value cannot be a valid EU VAT account using a fixed algorithm.
Parameters:
- Value: value to test.
referenced by:
Determines whether a value is a valid GUID. When true, the value can be converted by to_guid.
Parameters:
- Input: value to convert.
referenced by:
Evaluates to boolean true when the value provided is a valid IBAN account.
Parameters:
- Value: value to test.
- Remove space: indicator whether spaces should be removed before IBAN validation.
referenced by:
Determines whether a value is a valid number according to the specified format. When true, the value can be converted by to_number.
Parameters:
- Input: value to convert.
- Format: format mask. Optional; defaults to local style. List of format masks is available in a separate section.
referenced by:
Evaluates to boolean true when the value provided is a valid phone number for the country indicated. In case no country is specified, an E.164 notation is required.
Parameters:
- Phone: phone number as text.
- Country: country code as ISO 3166-1 alpha-2 code.
referenced by:
referenced by:
::= JSONDECODE
referenced by:
referenced by:
::= JSONENCODE
referenced by:
Gets the number of characters in provided string.
Parameters:
- Input: the string to get the length of.
referenced by:
Determine the Levenshtein distance between two values as defined on Wikipedia.
referenced by:
Get the natural logarithm of a number.
Parameters:
- Input: a number to get the natural logarithm from.
referenced by:
Get the logarithm of a number, by default the natural logarithm.
Parameters:
- Input: a number to get the logarithm of.
- Base [optional]: the base of the logarithm; defaults to e.
referenced by:
Converts provided string to lowercase.
Parameters:
- Input: the string that will be converted to lowercase.
referenced by:
Pad a string on the left to make it a specified length.
Parameters:
- Input: string to be padded.
- Length: the length the string should be padded to.
- Characters [optional]: characters to pad with.
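For instance:

```sql
-- Left-pad '42' with zeros to a length of 5.
select lpad('42', 5, '0')   -- '00042'
```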
referenced by:
Trims characters from the left side of a string.
Parameters:
- Input: the string from which to trim characters on the left side.
- Characters to trim [optional]: the character(s) to trim. Default is " ".
referenced by:
Ltrimtext trims the left side of a text by a list of trim texts. When the text starts with a trim text, the trim text is removed.
Parameters:
- Text: text to be trimmed.
- Trim Text [optional]: text(s) to trim from the left side of Text.
referenced by:
Converts a value to a 128-bit hash value as defined on Wikipedia.
Parameters:
- Input: Text to convert with MD5.
referenced by:
Converts a value to the Metaphone code as defined on Wikipedia.
Parameters:
- Input: value to convert to metaphone.
- Length: maximum output length of the given input.
referenced by:
::= METAPHONE3
referenced by:
referenced by:
Get the remainder of a divide calculation.
Parameters:
- dividend: a number.
- divider: a number.
referenced by:
Subtracts a value from another.
Parameters:
- Value: a number or datetime.
- Subtract: a number or datetime.
referenced by:
Collect the minute from a date.
Parameters:
- Input: A dateTime.
referenced by:
Collect the month from a date.
Parameters:
- Input: A dateTime.
referenced by:
referenced by:
Creates a new GUID.
Returns: the new GUID.
referenced by:
Coalesces the provided values, evaluating to the first value that is not null.
Returns: the first non-null value.
referenced by:
Adds a value to another.
Parameters:
- Value: a number or datetime.
- Add: a number or datetime.
referenced by:
Raises a number to the power of another.
Parameters:
- Value: a number.
- Exponent: a number.
referenced by:
referenced by:
Evaluates to the phone number in E.164 notation format. In case no country is specified, an E.164 notation is required.
Parameters:
- Phone: phone number as text.
- Country: country code as ISO 3166-1 alpha-2 code.
referenced by:
Evaluates to the phone number in national notation format. In case no country is specified, an E.164 notation is required.
Parameters:
- Phone: phone number as text.
- Country: country code as ISO 3166-1 alpha-2 code.
referenced by:
phone_number_to_international:
Evaluates to the phone number in international notation format. In case no country is specified, an E.164 notation is required.
Parameters:
- Phone: phone number as text.
- Country: country code as ISO 3166-1 alpha-2 code.
referenced by:
Evaluates to the type of phone number. In case no country is specified, an E.164 notation is required. Possible types:
- FIXED_LINE: fixed landline.
- FIXED_LINE_OR_MOBILE: either a fixed landline or a mobile number.
- MOBILE: mobile phone number.
- PAGER: pager.
- PERSONAL_NUMBER: personal phone number.
- PREMIUM_RATE: premium rate.
- SHARED_COST: shared cost.
- TOLL_FREE: toll-free.
- UAN: Universal Access Number; a phone number that can be accessed without dialing geographic area codes.
- UNKNOWN: unknown type.
- VOICEMAIL: voicemail.
- VOIP: VoIP.
Parameters:
- Phone: phone number as text.
- Country: country code as ISO 3166-1 alpha-2 code.
referenced by:
::= BARCODE_QR
referenced by:
referenced by:
Generates a random number between 0 and 1.
Parameters:
- Seed [optional]: when provided, produces a repeatable sequence of random numbers for that seed value.
referenced by:
Generates a blob with pseudo-random values.
Parameters:
- Length: Produce a blob with this length in terms of bytes.
referenced by:
referenced by:
referenced by:
Extracts a substring from the given value using a regular expression. The regular expression must follow the Microsoft .NET regular expression language.
Parameters:
- Input: The text to get the substring from.
- Pattern: Regular expression pattern.
- Start position [optional]: The start index from the input.
- Appearance [optional]: Indicating the appearance of the substr operation.
- Match_parameter [optional]: A text literal that lets you change the default matching behavior of the function.
referenced by:
Determines the position of the regular expression in the given value. Returns 0 when the regular expression is not contained in the given value. The regular expression must follow the Microsoft .NET regular expression language.
Parameters:
- Input: The text to get the regular expression position from.
- Pattern: Regular expression pattern.
- Start position [optional]: The start index from the input.
- Appearance [optional]: Indicating the appearance of the instr operation.
- ReturnOption [optional]: Select either the first character found or the first character after the occurrence of the pattern.
- Match_parameter [optional]: A text literal that lets you change the default matching behavior of the function.
referenced by:
Replaces all occurrences matching the regular expression with the replacement value. The replacement value may contain references to matches in the regular expression by using the dollar-sign ('$') plus the reference number. The regular expression must follow the Microsoft .NET regular expression language.
Parameters:
- Input: The text to get the substring from.
- Pattern: Regular expression pattern.
- Replacement [optional]: Text to replace with.
- Start position [optional]: The start index from the input.
- Appearance [optional]: Indicating the appearance of the replace operation (use '0' for all appearances).
- Match_parameter [optional]: A text literal that lets you change the default matching behavior of the function. The available options are 'c' for case-sensitive, 'i' for ignore case, 'n' for single-line, 'm' for multi-line and 'x' for ignore pattern white space.
Returns: the input with every occurrence of the regular expression pattern replaced with the replacement.
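A sketch using the dollar-sign match references described above (standard .NET substitution semantics):

```sql
-- Reorder an ISO date into day/month/year using group references.
select regexp_replace('2024-01-31', '(\d{4})-(\d{2})-(\d{2})', '$3/$2/$1')
-- '31/01/2024'
```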
referenced by:
Get the remainder of a divide calculation.
The REMAINDER function uses the round function in its formula, whereas the MOD function uses the floor function in its formula.
Parameters:
- Number1: a number.
- Number2: a number.
referenced by:
Changes a given text by replacing all occurrences from an ordered list of texts by their associated replacement texts.
Parameters:
- Input: the text to change.
- Old text 1: the text to be replaced.
- New text 1: the replacement text.
- Old text n: additional texts to be replaced.
- New text n: additional replacement texts.
referenced by:
Reverses the characters of the input.
Parameters:
- Input: text to reverse.
referenced by:
Rounds the input to the nearest integer. When an amount of decimals is specified, it rounds to that amount of decimals for a number, or to that amount of time positions for a date.
Parameters:
- Input: A number or datetime to round.
- Decimals [optional]: A number to specify how many decimals it may round to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
referenced by:
::= ROW_NUMBER
referenced by:
The rightpad function pads the right side of a string with a specific set of characters to the given length. When no set of characters is given, it pads with spaces.
Parameters:
- Input: text to be padded.
- Length: the length to pad the input to.
- Pad text [optional]: text to add to the input when the length is larger than the input.
referenced by:
Trims characters from the right side of a string.
Parameters:
- Input: the string from which to trim characters from the right side.
- (Optional) Characters to trim: the character to trim. Default is " ".
referenced by:
Rtrimtext trims the right side of a text by a list of trim texts. When the text ends with a trim text, the trim text is removed.
Parameters:
- Text: text to be trimmed.
- Trim Text [optional]: text(s) to trim from the right side of Text.
referenced by:
Collect the microsecond from a date.
Parameters:
- Input: A dateTime.
referenced by:
Collect the millisecond from a date.
Parameters:
- Input: A dateTime.
referenced by:
referenced by:
Normalize a file path by replacing all invalid and non-ASCII characters for use in a file path by underscore. After that, the file path is made more readable by various operations such as removal of duplicate whitespace and underscore characters.
Parameters:
- Original file path: path of the file.
- Maximum file name length: length in characters into which the normalized file name must fit.
- Allow path separator: whether to allow the path separator '\' in the normalized file name. When not allowed, occurrences are replaced by underscores.
referenced by:
Collect the second from a date.
Parameters:
- Input: A dateTime.
referenced by:
Calculate hash value for parameter according to SHA1 algorithm.
Parameters:
- Value: text or BLOB to calculate SHA1 hash value for.
referenced by:
Calculate hash value for parameter according to SHA256 algorithm.
Parameters:
- Value: text or BLOB to calculate SHA256 hash value for.
referenced by:
Calculate hash value for parameter according to SHA384 algorithm.
Parameters:
- Value: text or BLOB to calculate SHA384 hash value for.
referenced by:
Calculate hash value for parameter according to SHA512 algorithm.
Parameters:
- Value: text or BLOB to calculate SHA512 hash value for.
referenced by:
Reduces a text to a maximum length. When the text exceeds the maximum length, text is removed from the middle and replaced by a shorter replacement text.
Parameters:
- Input: text to shorten.
- Length: maximum length of the shortened text.
- Replacement: text to replace the removed text with; defaults to ellipsis ('...').
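A sketch; the exact split point between the kept head and tail is engine-defined and not asserted here:

```sql
-- Shorten a long text to at most 15 characters,
-- replacing the removed middle part by '...'.
select shorten('The quick brown fox jumps', 15)
```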
referenced by:
Converts a value to the Soundex code as defined on Wikipedia.
Parameters:
- Input: text to retrieve the Soundex value from.
referenced by:
Returns the sign of a number, with -1 representing a negative number, 1 a positive number and 0 a zero.
Parameters:
- Input: A number.
referenced by:
Returns the sine of the provided angle.
Parameters:
- Input: the angle to get the sine of.
referenced by:
::= SPLIT_PART
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
Returns the square root of the provided number.
Parameters:
- Input: the number to get the square root of.
referenced by:
Gets a substring from the input.
Parameters:
- Input: text to gather the substring from.
- Start: start position.
- Length: maximum length of the substring.
referenced by:
Text value of a parameter associated with a context.
Parameters:
- context: a namespace.
- parameter: name of the parameter.
- (Optional) alias: name of the active data container to query. The highest-ranked data container is queried when not specified or null.
- APPLICATION_VERSION: version of the client application.
- APPLICATION_VERSION1: first number of version of the client application.
- APPLICATION_VERSION2: first two numbers of version of the client application.
- APPLICATION_VERSION3: first three numbers of version of the client application.
- APPLICATION_FULL: name and version of the client application.
- APPLICATION_BUILD_EXPIRATION_DATE: build expiration date of the client application.
- AUTHENTICATION_METHOD: current authentication method.
- BILLING_ID: billing ID deviating from agreement code.
- BILLING_REFERENCE: additional billing reference.
- CLIENT_IP_ADDRESS_INTERNAL: internal IP address of the client device.
- CLIENT_IP_ADDRESS_EXTERNAL: external IP address of the client device.
- CLIENT_LOGICAL_CORE_COUNT: number of logical processor cores in the client device.
- CLIENT_MACHINE_NAME: machine name of the client device.
- CLIENT_SYSTEM_64_BIT: whether the OS is 64-bit on the client device.
- CLIENT_SYSTEM_NAME: full OS name running on the client device.
- CLIENT_SYSTEM_DIRECTORY: system directory of the client device.
- CLIENT_SYSTEM_PAGE_SIZE: system page size of the client device.
- CLIENT_VIRTUAL_MACHINE: whether the client device is a virtual machine.
- CLR_VERSION_BUILD: build version of the Common Language Runtime.
- CLR_VERSION_MAJOR: major version of the Common Language Runtime.
- CLR_VERSION_MAJOR_REVISION: major revision of the Common Language Runtime.
- CLR_VERSION_MINOR: minor version of the Common Language Runtime.
- CLR_VERSION_MIN_REVISION: minor revision of the Common Language Runtime.
- COMPANY_ID: ID of the company of current user.
- COMPANY_NAME: name of the company of current user.
- COMPANY_PHONE: phone of the company of current user.
- COMPANY_IBAN: IBAN of the company of current user.
- COMPANY_BIC: BIC of the company of current user.
- COMPANY_WEB_SITE: web site of the company of current user.
- COMPANY_STATE: state of the company of current user.
- COMPANY_POSTAL_CODE: postal code of the company of current user.
- COMPANY_COUNTRY: country of the company of current user.
- DATA_CONTAINER_ALIAS: alias of active data container.
- DATA_CONTAINER_ID: ID of active data container.
- DATABASE_DESCRIPTION: description of database.
- DATABASE_FULL_NAME: full name of database.
- DATABASE_VERSION: version of database.
- DIRECTORY_SEPARATOR: OS-specific separator for directory elements.
- DIRECTORY_SEPARATOR_ALT: Alternative OS-specific separator for directory elements.
- IIID: Invantive installation ID.
- IUID: Invantive user ID.
- LANG: ISO abbreviation for the language name of the user. Alternative: USER_LANGUAGE_CODE.
- MODULE: name of the client application. Alternative: APPLICATION_NAME.
- PATH_SEPARATOR: OS-specific separator for path.
- POOL_IDENTITY_ID: pool identity ID.
- PROCESS_64_BIT: whether the OS process on the client device runs as 64-bit.
- PROCESS_COMMAND_LINE: command line used to start the OS process.
- PROCESS_CURRENT_DIRECTORY: current directory of the OS process.
- PROCESS_STACK_TRACE: current stack trace of the OS process.
- PROCESS_WORKING_SET: working set of the OS process.
- PROVIDER_DESCRIPTION: description of active data container.
- PROVIDER_DOCUMENTATION_URL: documentation (URL) of active data container.
- PROVIDER_DOWNLOAD_IMPLEMENTATION_URL: download driver (URL) of active data container.
- PROVIDER_INSTANCE_UID: UID of active data container.
- PROVIDER_NAME: name of active data container.
- PROVIDER_SHORT_NAME: short name of active data container.
- PROVIDER_TECHNICAL_DOCUMENTATION_URL: technical documentation (URL) of active data container.
- SESSION_USER: log on code of the current user. Alternatives: CURRENT_USER and USER.
- SESSION_TIME_OFFSET_SEC: difference in seconds between UTC and the date/time representation of the session. A positive number means that the represented date/time of the session is higher than the UTC time of the date/time.
- SESSIONID: session ID of current session.
- UI_LANGUAGE_CODE: language code of the user interface.
- USER_DOMAIN_NAME: Windows domain name of current user.
- USER_EMAIL_ADDRESS: email address of current user.
- USER_FIRST_NAME: first name of current user.
- USER_FULL_NAME: full name of current user.
- USER_GENDER: gender of current user.
- USER_HOME_DIRECTORY: home directory of current user on client device.
- USER_INTERACTIVE: whether the current user works interactive.
- USER_PICTURES_DIRECTORY: pictures directory of current user on client device.
- USER_FAVORITES_DIRECTORY: favorites directory of current user on client device.
- USER_DESKTOP_DIRECTORY: desktop directory of current user on client device.
- USER_DOCUMENTS_DIRECTORY: documents directory of current user on client device.
- USER_PROFILE_DIRECTORY: profile directory of current user on client device.
- USER_LAST_LOG_ON: time of last log on of current user.
- USER_LAST_NAME: last name of current user.
- USER_LINKED_IN: LinkedIn name of current user.
- USER_MIDDLE_NAME: middle name of current user.
- USER_MOBILE_NUMBER: mobile number of current user.
- USER_NATIONALITY: nationality of current user.
- USER_PHONE_NUMBER: phone number of current user.
- USER_PICTURE_URL: picture (URL) of current user.
- USER_SKYPE: Skype name of current user.
- USER_TITLE: title of current user.
- USER_TWITTER: Twitter name of current user.
- USER_WEB_SITE: personal web site of current user.
- VOLUME_SEPARATOR: OS-specific separator for volume.
referenced by:
Returns the tangent of the provided angle.
Parameters:
- Input: the angle to get the tangent of.
referenced by:
Multiplies one number by the second number.
Parameters:
- First: a number to multiply.
- Second: a number to multiply with.
referenced by:
Translate replaces all occurrences of each character in from_string with its corresponding character in to_string.
Parameters:
- input: the string in which to replace one set of characters with another set of characters.
- from_string: the string whose characters will be searched for in the input.
- to_string: each character in from_string is replaced with the corresponding character in to_string.
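A sketch assuming the per-character mapping described above:

```sql
-- Map '-' to '/' and '.' to ':' in one pass.
select translate('2024-01-31 12.30', '-.', '/:')   -- '2024/01/31 12:30'
```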
referenced by:
Replace all Invantive-style resources ('{res:...}') by their translation in the specified user interface language.
Parameters:
- txt: The string to replace resources in.
- language: ISO 639-1 language code. Optional; defaults to current user interface language.
referenced by:
Trims whitespaces from both sides of the provided string.
Parameters:
- Input: the string from which to trim characters.
referenced by:
Truncates a number to its integral part. When an amount of decimals is specified, it truncates to that amount of decimals for a number, or to that amount of time positions for a date.
Parameters:
- Input: A number or datetime to truncate.
- Decimals [optional]: A number to specify how many decimals it may truncate to in case of a number. In case of a datetime, it reflects the number of time positions, ranging from -2 for years to 2 for minutes.
referenced by:
Converts a value into its hexadecimal text representation.
Parameters:
- Input: value to convert.
referenced by:
Converts a value into a BLOB. Text is interpreted as UTF-16.
Parameters:
- Input: value to convert.
referenced by:
Converts a text containing Unicode escape sequences to regular characters.
Parameters:
- Input: text with Unicode escape sequences.
referenced by:
Converts provided string to uppercase.
Parameters:
- Input: the string that will be converted to uppercase.
referenced by:
Decodes a URL.
Parameters:
- Url: URL to decode.
referenced by:
Encodes a URL.
Parameters:
- Url: URL to encode.
referenced by:
Get the UNIX epoch time of a date/time.
Parameters:
- Input: A dateTime. Current date/time is used when no value is specified.
referenced by:
Decompress a BLOB using the GZIP-algorithm.
Parameters:
- input: The BLOB to decompress.
referenced by:
referenced by:
Format a text as an XML comment.
Parameters:
- Input: the input which will be formatted as XML comment.
::= XMLCOMMENT
referenced by:
Returns the XML decoded input.
Parameters:
- Input: the input which will be decoded into XML.
referenced by:
Returns the XML encoded input.
Parameters:
- Input: the input which will be encoded into XML.
referenced by:
::= XMLELEMENT
referenced by:
Applies an XSL style sheet to the XML instance.
Parameters:
- XML: XML type instance to be transformed with the XSL style sheet.
- Style sheet: The XSL style sheet to apply.
referenced by:
Pretty-prints XML text.
Parameters:
- Xml: XML to pretty-print.
referenced by:
Collects all data from the URL as binary data.
The URL must be publicly accessible. Use the NativePlatformScalarRequest view on cloud applications to directly access their web APIs.
Parameters:
- URL: the URL to collect the data from.
referenced by:
Collects all data from the URL as text.
The URL must be publicly accessible. Use the NativePlatformScalarRequest view on cloud applications to directly access their web APIs.
Parameters:
- URL: the URL to collect the data from.
- Encoding: the encoding of the data to receive; defaults to UTF-8.
referenced by:
referenced by:
Collect the quarter from a date.
Parameters:
- Input: A dateTime.
referenced by:
referenced by:
referenced by:
referenced by:
Gets the user log on code.
Returns: the user log on code.
referenced by:
referenced by:
Collect the year from a date.
Parameters:
- Input: A dateTime.
referenced by:
referenced by:
Converts a value into a BLOB. Text is interpreted as UTF-8.
Parameters:
- Input: value to convert.
referenced by:
Converts a value into a boolean. Boolean expressions keep their value. For numbers, 0 is considered false and all other values true; all dates are false. For texts, the following are considered true: 'true', 't', 'yes', 'y', 'on', '1', 'True', 'TRUE', 'T', 'YES', 'Y' and 'ON'. The following are considered false: 'false', 'f', 'no', 'n', 'off', '0', 'False', 'FALSE', 'F', 'NO', 'N' and 'OFF'. In all other scenarios, the default boolean conversion of Invantive holds.
Parameters:
- Input: value to convert.
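For instance, per the text and number rules above:

```sql
select to_boolean('yes')   -- true
,      to_boolean(0)       -- false
```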
::= TO_BOOLEAN
referenced by:
Converts a value into text.
Parameters:
- Input: value to convert.
- Format: format mask. Optional; defaults to American style. List of format masks is available in a separate section.
referenced by:
Converts a value into a datetime.
Parameters:
- Input: value to convert.
- Format: format mask. Optional; defaults to American style. List of format masks is available in a separate section.
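A sketch assuming an Oracle-style format mask; see the format mask section of this reference for the supported masks:

```sql
-- Parse a day-month-year text into a datetime.
select to_date('31-12-2024', 'DD-MM-YYYY')
```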
referenced by:
Converts a value into a guid.
Parameters:
- Input: value to convert.
referenced by:
Converts a value into a number. An error is raised when the value cannot be converted to a number.
Parameters:
- Input: value to convert.
- Format: format mask. Optional; defaults to local style. List of format masks is available in a separate section.
referenced by:
Generates a blob with 0-byte values.
Parameters:
- Length: Produce a blob with this length in terms of bytes.
referenced by:
Compress a BLOB using the Zlib-algorithm.
Parameters:
- input: The BLOB to compress.
referenced by:
referenced by:
The time of the system clock in local time at the device where Invantive UniversalSQL runs.
Returns: current date/time.
| SYSDATE
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
referenced by:
- catalogIdentifier
- column
- columnNoAlias
- createTableArgument
- csvTableColumnSpec
- dataContainerAlias
- excelTableColumnSpec
- fieldIdentifier
- htmlTableColumnSpec
- identifierWithMinus
- joinSet
- jsonTableColumnSpec
- namedExpression
- noJoinSet
- pSqlFunctionOrProcedureStatementNS
- pSqlPackageProcedureStatementNS
- pSqlVariableName
- parameterExpression
- partitionIdentifier
- partitionSimpleIdentifier
- roleIdentifier
- schemaIdentifier
- tableIdentifier
- triggerRecordVariableExpression
- xmlTableColumnSpec
referenced by:
referenced by:
::= ABS
| ABSENT
| ACOS
| AFFECTED
| AFTER
| APPLY
| APPROACH
| AREA
| ASC
| ASCII
| ASIN
| AT_C
| ATAN
| ATAN2
| ATTACH
| AUTO
| AVG
| BACKING
| BASE_64
| BASENAME
| BEFORE
| BEGIN
| BILLING
| BINARY
| BIT
| BY
| CACHE
| CAMEL
| CASE
| CEIL
| CHAR
| CHR
| COALESCE
| CODE
| COLUMN
| COLUMNS
| COMPARE
| COMPRESS
| CONTINUE
| CONTRACT
| COPY
| COS
| COUNT
| CROSS
| CSV
| CSVTABLE
| CURSOR
| DATA
| DATE
| DATEADD
| DATEPART
| DATETIME
| DEC
| DEPTH
| DESC
| DISABLE
| DOWNLOAD
| DOUBLE
| DROPPED
| ELEMENTS
| ELSE
| EMPTY_
| ENABLE
| END
| ERROR_
| ERRORS
| ESCAPED
| EXISTS
| EXIT
| EXCEL
| EXCLUDE
| EXP
| EXPLICIT
| FACTS
| FEED
| FILE
| FIRST
| FLOOR
| FOR
| FORCE
| FORMAT
| FRESH
| FULL
| FUNCTION
| GETDATE
| GOTO
| GROUP
| GZIP
| HEADERS
| HIDE
| HISTORY
| HTTPGET
| HTTPPOST
| ID
| IDENTITY
| IGNORE
| IMAGE
| IN
| INITCAP
| INCLUDE
| INCOMING
| INDEX
| INSTEAD
| INTEGER
| INTERVAL
| INVALID
| IS_DATE
| IS_EMAIL
| IS_GUID
| IS_IBAN
| IS_URI
| IUID
| JOIN_SET
| JSON
| LABEL
| LAST
| LEFT
| LENGTH
| LICENSE
| LIMIT
| LINE
| LINES
| LISTAGG
| LOAD
| LOGICAL
| LONGTEXT
| LOOP
| LOWER
| LOW_COST
| LPAD
| LTRIM
| MAINTAIN
| MAX
| MAXIMUM
| MD5
| MESSAGES
| METADATA
| MIN
| MINUS_C
| MOD
| MODEL
| MONEY
| MY
| NAME
| NATIVE
| NEW
| NEW_TIME
| NEWID
| NEXT
| NO
| NOWUTC
| NULLS
| NUMBER
| NVL
| OBSOLETE
| ODS
| OF
| OLD
| ONCE
| ORDER
| OUT
| OUTER
| OUTPUT
| OVERALL
| OVERVIEW
| PARALLEL
| PASSING
| PATH
| PER
| PIVOT
| POOL
| POSITION
| POSTFIX
| POWER
| PREFER
| PREFIX
| PRODUCT
| PSQL
| PURGE
| QUOTING
| RAISE
| RAND
| RANK
| RANDOM
| READY
| REFRESH
| REPEAT
| RESOLVE
| RETURN
| RETURNS
| REVERSE
| RIGHT
| ROLES
| ROOT
| ROUND
| ROW
| ROWS
| RPAD
| RTRIM
| SAMPLE
| SERIAL
| SESSION
| SHOW
| SIGN
| SIN
| SITE
| SITEMAP
| SKIP_
| SOUNDEX
| SOURCE
| SQL
| SQLERRM
| SQLCODE
| SQRT
| START
| STATE
| STAY
| STDDEV
| STEP
| STRING
| SUM
| SYNC
| TABLES
| TAN
| TEXT
| THEN
| TIME
| TINYTEXT
| TO
| TOKEN
| TOP
| TO_ARRAY
| TO_CHAR
| TO_DATE
| TO_GUID
| TO_HEX
| TRICKLE
| TRIGGERS
| TRIM
| TRUNC
| TYPE
| UNION
| UNISTR
| UNKNOWN
| UPDATE
| UPGRADE
| UPPER
| USE
| USER
| USING
| UTC
| UTC_DATE
| UUID
| VERSION
| VERSIONS
| WEBHOOKS
| WHEN
| WHILE
| WITHIN
| XML
| XMLDATA
| XMLTABLE
| XMLTYPE
| XSINIL
| YEAR
| ZIP
| LOG
| LN
| SECOND
| MINUTE
| HOUR
| INSTR
| DAY
| MONTH
| QUARTER
| YEAR
| CONCAT
| WITH
| EQUAL
| SHORTEN
| SUBSTR
referenced by:
referenced by:
A constant text value with varchar2 data type.
referenced by:
- allColumnsSpecColumnNamePostfix
- allColumnsSpecColumnNamePrefix
- allColumnsSpecLabelPostfix
- allColumnsSpecLabelPrefix
- alterDataDictionarySetBackingConnectionString
- alterPersistentCacheConfigureWebhooksStatement
- alterPersistentCacheDownloadStatement
- alterPersistentCacheDropStatement
- alterPersistentCachePurgeStatement
- alterPersistentCacheSetBackingConnectionString
- alterPersistentCacheSetDataContainerOptions
- alterPersistentCacheSetStatement
- alterPersistentCacheSetTableOptions
- beginTransactionStatement
- commitTransactionStatement
- constantExpression
- csvTableColumnSpec
- forCsvClause
- forJsonClause
- forXmlClause
- forXmlClauseCommonDirectives
- htmlTableColumnSpec
- htmlTableExpression
- intervalConstant
- jsonTableColumnSpec
- labeled
- pSqlExecuteNativeStatementNS
- partitionIdentifier
- resultSetName
- rollbackTransactionStatement
- xmlTableColumnSpec
referenced by:
A constant interval value, reflecting the time span between two dates. The string constant consists of an integer number and unit of time, taken from the following list:
- millisecond,
- second,
- minute,
- hour,
- day,
- week, and
- year.
The units may be postfixed with an 's' without changing meaning, like 'years'.
Valid interval values are for example: "5 seconds", "20 hours" and "1 year". There is no support for combined intervals such as "30 minutes and 30 seconds".
referenced by:
- alterPersistentCachePartitionRefreshStatement
- alterPersistentCacheRefreshStatement
- alterPersistentCacheTableRefreshStatement
- constantExpression
- httpDiskCache
- httpMemoryCache
- ods
A constant numeric value with numeric data type.
referenced by:
- alterPersistentCacheDownloadStatement
- alterPersistentCachePartitionRefreshStatement
- alterPersistentCacheRefreshStatement
- alterPersistentCacheSetStatement
- alterPersistentCacheTableRefreshStatement
- constantExpression
- createTableStatement
- csvTableColumnSpec
- excelTableColumnSpec
- excelTableOptions
- forCsvClause
- forJsonClause
- insertStatement
- internetTableOptions
- joinSet
- limitClause
- partitionIdentifier
- partitionSimpleIdentifier
- sqlDataTypeExtended
- synchronizeMaxRowsStatement
- synchronizeStatement
- topClause
referenced by:
- alterPersistentCacheSetStatement
- alterPersistentCacheSetTableOptions
- constantExpression
- httpDiskCache
- httpMemoryCache
- lowCost
- ods
The "unknown" value null.
referenced by:
The PSQL create function statement creates a PSQL block that executes the code and returns a value to the caller using the 'return' statement. An error is raised when a function, procedure or package with the same name already exists. When 'or replace' is specified, a previously existing function with the same name is dropped before creating the new function.
referenced by:
referenced by:
referenced by:
referenced by:
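For example, a function that returns a computed value (the function name and VAT rate are illustrative, and an Oracle-style 'is ... begin ... end' body is assumed):

```sql
create or replace function add_vat
( p_amount number
)
return number
is
begin
  --
  -- Return the amount including 21% VAT.
  --
  return p_amount * 1.21;
end;
```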
The PSQL create procedure statement creates a PSQL block that executes code when called. An error is raised when a function, procedure or package with the same name already exists. When 'or replace' is specified, a previously existing procedure with the same name is dropped before creating the new procedure.
referenced by:
referenced by:
referenced by:
referenced by:
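A minimal procedure sketch (the procedure name is illustrative; dbms_output is assumed to be available as in Oracle PL/SQL):

```sql
create or replace procedure log_message
( p_text varchar2
)
is
begin
  --
  -- Write the text to the output buffer.
  --
  dbms_output.put_line(p_text);
end;
```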
referenced by:
referenced by:
referenced by:
A PSQL block is a structure to define procedural logic. It can contain both procedural logic as well as SQL statements like "select".
referenced by:
referenced by:
referenced by:
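A minimal anonymous block sketch, combining a declare section and a body (dbms_output is assumed to be available):

```sql
declare
  l_greeting varchar2;
begin
  l_greeting := 'Hello, world';
  dbms_output.put_line(l_greeting);
end;
```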
A PSQL declare section defines one or more local variables, which are available in the block and nested blocks.
referenced by:
referenced by:
An item declaration defines one named variable, based upon data type. The initial value can be added as a constant.
referenced by:
referenced by:
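For example, declarations with and without an initial value (the variable names are illustrative):

```sql
declare
  l_counter pls_integer := 0;
  l_label   varchar2;
begin
  null;
end;
```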
A PSQL body contains the procedural logic as well as SQL statements. Variables must have been declared beforehand.
referenced by:
referenced by:
referenced by:
referenced by:
A number of basic PSQL statements are available.
referenced by:
- pSqlBlockOrStatement
- pSqlBodyNoBatchSeparator
- pSqlExceptionWhenName
- pSqlExceptionWhenOthers
- sqlOrPSqlStatement
A PSQL block or statement defines a procedural step or a SQL statement to be executed.
referenced by:
referenced by:
- pSqlElsIfExpression
- pSqlForNumberLoopStatement
- pSqlForRecordLoopStatement
- pSqlIfStatement
- pSqlRepeatUntilLoopStatement
- pSqlWhileLoopStatement
The null-statement is a NOP-statement (no operation). The null-statement is necessary when a PSQL statement is syntactically required, but no activity needs to be performed, such as in a branch of an if-statement. The null-statement also makes explicit that the developer has considered the actions needed and found that no action applies to a specific scenario. This leads to improved code documentation.
referenced by:
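For example, within an if-statement (the variable and exception names are illustrative):

```sql
if l_status = 'OK'
then
  --
  -- Considered: no action applies when the status is OK.
  --
  null;
else
  raise program_error;
end if;
```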
Return a value from a function to the calling code.
referenced by:
The assignment statement assigns a new value to a variable. To assign the result of a SQL query to a variable, use a select ... into ... statement.
referenced by:
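A sketch of both forms (the variable and table names are illustrative):

```sql
begin
  l_total := l_quantity * l_unit_price;
  --
  -- Assign a query result using select ... into.
  --
  select count(*)
  into   l_row_count
  from   mytable;
end;
```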
The PSQL continue statement restarts execution of the innermost loop at the start of the loop. When a boolean expression is specified, the innermost loop is only restarted when the boolean expression evaluates to true.
referenced by:
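For example, skipping part of a loop body (mod and dbms_output are assumed to be available as in Oracle PL/SQL):

```sql
for i in 1 .. 10
loop
  if mod(i, 2) = 0
  then
    --
    -- Skip even numbers.
    --
    continue;
  end if;
  dbms_output.put_line(i);
end loop;
```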
pSqlExecuteImmediateStatement:
The execute immediate PSQL statement enables the use of SQL statements that are compiled at runtime. For instance, dynamic DDL statements can not always be compiled at compile time; execute immediate enables executing these.
referenced by:
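For example, running DDL whose table name is only known at runtime (the in-memory storage data container and variable name are illustrative):

```sql
begin
  --
  -- The statement text is composed at runtime and can
  -- therefore not be compiled beforehand.
  --
  execute immediate 'create or replace table ' || l_table_name || '@inmemorystorage as select * from dual';
end;
```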
pSqlExecuteImmediateStatementNS:
referenced by:
pSqlExecuteImmediateUsingClause:
referenced by:
pSqlExecuteImmediateUsingItem:
referenced by:
Execute a statement directly on a specific data container. The statement must match the native SQL or API code for the data container specified. Output of the statement is retrieved into PSQL variables.
referenced by:
referenced by:
This PSQL exit statement exits execution of the innermost loop. When a boolean expression is specified, the innermost loop is only exited when the boolean expression evaluates to true.
referenced by:
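For example, assuming an Oracle-style 'exit when' for the conditional variant (the variable name is illustrative):

```sql
while true
loop
  l_attempts := l_attempts + 1;
  exit when l_attempts >= 3;
end loop;
```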
Continue execution at the location of the specified label.
referenced by:
Specify the location of a label.
referenced by:
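A sketch, assuming Oracle-style '&lt;&lt;label&gt;&gt;' syntax for the label statement (the label and variable names are illustrative):

```sql
begin
  if l_skip_details
  then
    goto done;
  end if;
  dbms_output.put_line('Details...');
  <<done>>
  null;
end;
```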
The if-statement performs conditional logic. When the boolean expression after if holds, the PSQL block after the 'then' will be executed. Other branches can be specified using an elsif. Otherwise, and only when specified, the logic after the else is executed.
referenced by:
referenced by:
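For example, with all three branches (the variable names are illustrative):

```sql
if l_amount < 0
then
  raise value_error;
elsif l_amount = 0
then
  --
  -- Nothing to add.
  --
  null;
else
  l_total := l_total + l_amount;
end if;
```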
A variety of PSQL statements for loops are available.
referenced by:
This PSQL integer loop statement iterates over a range of integer values, executing PSQL statements for each iterated value. The iteration goes from the first value to the last value in increments of the specified step size, which defaults to 1. When 'reverse' is specified, the iteration goes backward in decrements of 1.
::= FOR pSqlVariableName IN REVERSE? ( expression | pSqlVariableName ) DOT DOT ( expression | pSqlVariableName ) ( STEP ( expression | pSqlVariableName ) )? LOOP pSqlBlockOrStatements END LOOP BATCHSEPARATOR
referenced by:
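For example, following the production above (dbms_output is assumed to be available):

```sql
--
-- Forward with step size 2: 1, 3, 5, 7, 9.
--
for i in 1 .. 10 step 2
loop
  dbms_output.put_line(i);
end loop;
--
-- Backward in decrements of 1: 3, 2, 1.
--
for i in reverse 1 .. 3
loop
  dbms_output.put_line(i);
end loop;
```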
This PSQL result set loop statement iterates over a result set returned by an Invantive UniversalSQL query. The PSQL statements are executed for each record. The values of the current record can be retrieved through the loop variable.
referenced by:
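For example, iterating over a query result (the table and column names are illustrative):

```sql
for r in
( select code
  ,      name
  from   mytable
)
loop
  dbms_output.put_line(r.code || ': ' || r.name);
end loop;
```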
This PSQL repeat until loop statement repeatedly executes PSQL statements as long as the loop is not terminated by an 'exit' statement.
referenced by:
This PSQL while loop statement executes PSQL statements as long as the specified boolean condition evaluates to true at loop end.
referenced by:
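For example (the variable name is illustrative):

```sql
while l_queue_size > 0
loop
  l_queue_size := l_queue_size - 1;
end loop;
```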
The raise statement raises one from a list of pre-defined exceptions:
- INVALID_NUMBER: typically used when a string is converted to a number, but does not contain a valid number.
- NO_DATA_FOUND: typically used when a 'select into' statement returns zero rows.
- PROGRAM_ERROR: typically used to signal an (internal) system error.
- TOO_MANY_ROWS: typically used when a 'select into' statement returns two or more rows.
- VALUE_ERROR: typically used when during evaluation of an expression an error occurs.
- ZERO_DIVIDE: typically used when the right-hand side of a divide is zero.
The raise_application_error statement raises an exception, which consists of a non-unique text or numeric identification plus a message text. Optionally, you can add a kind request and the natural key as parameters.
referenced by:
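A sketch combining both statements (the error number and message texts are illustrative, and an Oracle-style exception block is assumed):

```sql
begin
  if l_divisor = 0
  then
    raise zero_divide;
  end if;
exception
  when zero_divide
  then
    raise_application_error(-20001, 'The divisor may not be zero.');
end;
```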
pSqlFunctionOrProcedureStatement:
referenced by:
pSqlFunctionOrProcedureStatementNS:
referenced by:
pSqlPackageProcedureStatement:
referenced by:
pSqlPackageProcedureStatementNS:
referenced by:
referenced by:
- pSqlArgument
- pSqlAssignmentStatement
- pSqlDropFunction
- pSqlDropProcedure
- pSqlExceptionWhenName
- pSqlExecuteImmediateUsingItem
- pSqlForNumberLoopStatement
- pSqlForRecordLoopStatement
- pSqlFunctionSpec
- pSqlGotoStatement
- pSqlItemDeclaration
- pSqlLabelStatement
- pSqlProcedureSpec
- pSqlVariableList
::= PLS_INTEGER
| CURSOR
referenced by:
EOF ::= $
referenced by:
Contact
- Invantive® BV
- Biesteweg 11
- 3849 RD Hierden
- Nederland
- Sales: +31 88 00 26 500
- Support: forums.invantive.com
- Office hours: 09:00 - 17:00 CET
- E-mail: sales@invantive.com
- Web: invantive.com
- Report security incident
- Telephone: +31 88 00 26 598
- Email: security@invantive.com
- More information
- Information for use in Outlook
- Chamber of Commerce: 130 31 406
- Managing Director: Guido Leenders
- Company domiciled in Roermond.
- VAT: NL812602377B01
- Founded: 2012
- NAICS: 511210
- Bank: IBAN NL25 BUNQ 2098 2586 07, BIC BUNQNL2A