PostgreSQL Programmer's Guide
Contents
1. An application should PQclear each PGresult whenever it is no longer needed, to avoid memory leaks:

       PQclear(res);

       while (1)
       {
           /* wait a little bit between checks;
              waiting with select() would be more efficient */
           sleep(1);
           /* collect any asynchronous backend messages */
           PQconsumeInput(conn);
           /* check for asynchronous notify messages */
           while ((notify = PQnotifies(conn)) != NULL)
           {
               fprintf(stderr,
                       "ASYNC NOTIFY of '%s' from backend pid '%d' received\n",
                       notify->relname, notify->be_pid);
               free(notify);
           }
       }

       /* close the connection to the database and cleanup */
       PQfinish(conn);

   Sample Program 3: testlibpq3.c

       /*
        * testlibpq3.c
        * Test the C version of libpq, the Postgres frontend library;
        * tests the binary cursor interface.
        *
        * Populate a database by doing the following:
        *
        *   CREATE TABLE test1 (i int4, d float4, p polygon);
        *   INSERT INTO test1 values (1, 3.567, '(3.0,4.0),(1.0,2.0)'::polygon);
        *   INSERT INTO test1 values (2, 89.05, '(4.0,3.0),(2.0,1.0)'::polygon);
        *
        * The expected output is:
        *
        *   tuple 0: got
        *     i = (4 bytes) 1,
        *     d = (4 bytes) 3.567000,
        *     p = (4 bytes) 2 points, boundbox = (hi=3.000000/4.000000, lo=1.000000/2.000000)
        *   tuple 1: got
        *     i = (4 bytes) 2,
        *     d = (4 bytes) 89.050003,
        *     p = (4 bytes) 2 points, boundbox = (hi=4.000000/3.000000, lo=2.000000/1.000000)
        */
       #include <stdio.h>
       #include "libpq-fe.h"
       #include "utils/geo-decls.h"   /* for the POLYGON type */
2. 
       answer
       ------
            1

   Notice that we defined a target list for the function (with the name RESULT), but the target list of the query that invoked the function overrode the function's target list. Hence, the result is labelled "answer" instead of "one".

   It's almost as easy to define SQL functions that take base types as arguments. In the example below, notice how we refer to the arguments within the function as $1 and $2:

       CREATE FUNCTION add_em(int4, int4) RETURNS int4
           AS 'SELECT $1 + $2;' LANGUAGE 'sql';

       SELECT add_em(1, 2) AS answer;

       answer
       ------
            3

   SQL Functions on Composite Types

   When specifying functions with arguments of composite types (such as EMP), we must not only specify which argument we want (as we did above with $1 and $2) but also the attributes of that argument. For example, take the function double_salary that computes what your salary would be if it were doubled:

       CREATE FUNCTION double_salary(EMP) RETURNS int4
           AS 'SELECT $1.salary * 2 AS salary;' LANGUAGE 'sql';

       SELECT name, double_salary(EMP) AS dream
           FROM EMP
           WHERE EMP.cubicle ~= '(2,1)'::point;

   Notice the use of the syntax $1.salary. Before launching into the subject of functions that return composite types, we must first introduce the function notation for projecting attributes. The simple way to explain this is that we can usually use the notation
3. The filename argument specifies the Unix pathname of the file to be imported as a large object.

   Exporting a Large Object

   To export a large object into a Unix file, call

       int lo_export(PGconn *conn, Oid lobjId, text *filename);

   The lobjId argument specifies the Oid of the large object to export, and the filename argument specifies the Unix pathname of the file.

   Opening an Existing Large Object

   To open an existing large object, call

       int lo_open(PGconn *conn, Oid lobjId, int mode);

   The lobjId argument specifies the Oid of the large object to open. The mode bits control whether the object is opened for reading (INV_READ), writing (INV_WRITE), or both. A large object cannot be opened before it is created. lo_open returns a large object descriptor for later use in lo_read, lo_write, lo_lseek, lo_tell, and lo_close.

   Writing Data to a Large Object

   The routine

       int lo_write(PGconn *conn, int fd, char *buf, int len);

   writes len bytes from buf to large object fd. The fd argument must have been returned by a previous lo_open. The number of bytes actually written is returned. In the event of an error, the return value is negative.

   Seeking on a Large Object

   To change the current read or write location on a large object, call

       int lo_lseek(PGconn *conn, int fd, int offset, int whence);

   This routine moves the current location pointer for the large object described by fd to the new location specified by offset. The valid values for whence are SEEK_SET, SEEK_CUR, and SEEK_END.
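   The lo_* calls deliberately mirror the Unix file-system interface (open, read, write, lseek, close), so the call sequence can be rehearsed against an ordinary file. The following standalone sketch is only an analogy: the scratch file name is invented here, and plain open/lseek stand in for lo_open/lo_lseek.

   ```c
   #include <assert.h>
   #include <fcntl.h>
   #include <string.h>
   #include <unistd.h>

   int main(void)
   {
       const char *path = "lo_demo.tmp";   /* invented scratch file */
       char buf[6] = { 0 };

       /* "create" and open for writing, as lo_creat + lo_open(..., INV_WRITE) */
       int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
       assert(fd >= 0);
       assert(write(fd, "hello world", 11) == 11);    /* like lo_write */
       close(fd);                                     /* like lo_close */

       /* reopen for reading, as lo_open(..., INV_READ) */
       fd = open(path, O_RDONLY);
       assert(fd >= 0);

       /* move the location pointer, as lo_lseek(..., 6, SEEK_SET) */
       assert(lseek(fd, 6, SEEK_SET) == 6);
       assert(read(fd, buf, 5) == 5);                 /* like lo_read */
       assert(memcmp(buf, "world", 5) == 0);

       close(fd);
       unlink(path);
       return 0;
   }
   ```

   As in the analogy, an lo_read or lo_write after an lo_lseek operates at the new offset, and the descriptor is only valid until lo_close.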
4. Chapter 16. libpq

   PQerrorMessage: returns the error message most recently generated by an operation on the connection.

       char *PQerrorMessage(PGconn *conn);

   Nearly all libpq functions will set PQerrorMessage if they fail. Note that by libpq convention, a non-empty PQerrorMessage will include a trailing newline.

   PQbackendPID: returns the process ID of the backend server handling this connection.

       int PQbackendPID(PGconn *conn);

   The backend PID is useful for debugging purposes and for comparison to NOTIFY messages (which include the PID of the notifying backend). Note that the PID belongs to a process executing on the database server host, not the local host.

   Query Execution Functions

   Once a connection to a database server has been successfully established, the functions described here are used to perform SQL queries and commands.

   PQexec: submit a query to Postgres and wait for the result.

       PGresult *PQexec(PGconn *conn, const char *query);

   Returns a PGresult pointer, or possibly a NULL pointer. A non-NULL pointer will generally be returned except in out-of-memory conditions or serious errors such as inability to send the query to the backend. If a NULL is returned, it should be treated like a PGRES_FATAL_ERROR result. Use PQerrorMessage to get more information about the error.

   The PGresult structure encapsulates the query result returned by the backend. libpq application programmers should be careful to maintain the PGresult abstraction.
5. Chapter 21. JDBC Interface

   Author: Written by Peter T. Mount (peter@retep.org.uk), the author of the JDBC driver.

   JDBC is a core API of Java 1.1 and later. It provides a standard set of interfaces to SQL-compliant databases. Postgres provides a type 4 JDBC driver. Type 4 indicates that the driver is written in pure Java and communicates in the database's own network protocol. Because of this, the driver is platform independent; once compiled, the driver can be used on any platform.

   Building the JDBC Interface

   Compiling the Driver

   The driver's source is located in the src/interfaces/jdbc directory of the source tree. To compile, simply change directory to that directory and type:

       % make

   Upon completion, you will find the archive postgresql.jar in the current directory. This is the JDBC driver.

   Note: You must use make, not javac, as the driver uses some dynamic loading techniques for performance reasons, and javac cannot cope. The Makefile will generate the jar archive.

   Installing the Driver

   To use the driver, the jar archive postgresql.jar needs to be included in the CLASSPATH.

   Example: I have an application that uses the JDBC driver to access a large database containing astronomical objects. I have the application and the JDBC driver installed in the /usr/local/lib directory, and the Java JDK installed in /usr/local/jdk1.1.6. To run the application, I would use:

       export CLASSPATH=/usr/local/lib/finder.jar:/usr/local
6. 
       % cd doc/src/sgml
       % make tutorial.rtf

   2. Open a new document in Applix Words and then import the RTF file.

   3. Print out the existing Table of Contents, to mark up in the following few steps.

   4. Insert figures into the document. Center each figure on the page using the centering margins button. Not all documents have figures; you can grep the SGML source files for the string "Graphic" to identify those parts of the documentation which may have figures. A few figures are replicated in various parts of the documentation.

   5. Work through the document, adjusting page breaks and table column widths.

   6. If a bibliography is present, Applix Words seems to mark all remaining text after the first title as having an underlined attribute. Select all remaining text, turn off underlining using the underlining button, then explicitly underline each document and book title.

   7. Work through the document, marking up the ToC hardcopy with the actual page number of each ToC entry.

   8. Replace the right-justified, incorrect page numbers in the ToC with correct values. This only takes a few minutes per document.

   9. Save the document as native Applix Words format to allow easier last-minute editing later.

   10. Export the document to a file in PostScript format.

   11. Compress the PostScript file using gzip. Place the compressed file into the doc directory.

   Toolsets

   We have documented experience with two installation methods for the various tools that are needed
7. 
       {$i < $ntups} {incr i} {
           lappend datnames [pg_result $res -getTuple $i]
       }
       pg_result $res -clear
       pg_disconnect $conn
       return $datnames

   pgtcl Command Reference Information

   pg_connect

   Name

   pg_connect - opens a connection to the backend server

   Synopsis

       pg_connect -conninfo connectOptions
       pg_connect dbName [-host hostName] [-port portNumber] [-tty pqtty] [-options optionalBackendArgs]

   Inputs (new style)

       connectOptions        A string of connection options, each written in the form keyword = value.

   Inputs (old style)

       dbName                Specifies a valid database name.
       [-host hostName]      Specifies the domain name of the backend server for dbName.
       [-port portNumber]    Specifies the IP port number of the backend server for dbName.
       [-tty pqtty]          Specifies file or tty for optional debug output from backend.
       [-options optionalBackendArgs]
                             Specifies options for the backend server for dbName.

   Outputs

       dbHandle              If successful, a handle for a database connection is returned. Handles start with the prefix "pgsql".

   Description

   pg_connect opens a connection to the Postgres backend. Two syntaxes are available. In the older one, each possible option has a separate option switch in the pg_connect statement. In the newer form, a single option string is supplied that can contain multiple option values. See pg_conndefaults for info about the available options in the newer syntax.

   Usage

   XXX thomas 1997-12-24
8. AuthenticationUnencryptedPassword (B)

       Byte1('R')    Identifies the message as an authentication request.
       Int32(3)      Specifies that an unencrypted password is required.

   AuthenticationEncryptedPassword (B)

       Byte1('R')    Identifies the message as an authentication request.
       Int32(4)      Specifies that an encrypted password is required.
       Byte2         The salt to use when encrypting the password.

   BackendKeyData (B)

       Byte1('K')    Identifies the message as cancellation key data. The frontend must save these values if it wishes to be able to issue CancelRequest messages later.
       Int32         The process ID of this backend.
       Int32         The secret key of this backend.

   BinaryRow (B)

       Byte1('B')    Identifies the message as a binary data row. (A prior RowDescription message defines the number of fields in the row and their data types.)
       Byten         A bit map with one bit for each field in the row. The 1st field corresponds to bit 7 (MSB) of the 1st byte, the 2nd field corresponds to bit 6 of the 1st byte, the 8th field corresponds to bit 0 (LSB) of the 1st byte, the 9th field corresponds to bit 7 of the 2nd byte, and so on. Each bit is set if the value of the corresponding field is not NULL. If the number of fields is not a multiple of 8, the remainder of the last byte in the bit map is wasted. Then, for each field with a non-NULL value, there is the following:
       Int32         Specifies the size of the value of the field
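   The bit-map layout described for BinaryRow can be checked with a few lines of standalone C. This is an illustrative sketch, not protocol code; the helper name field_is_null is invented here.

   ```c
   #include <assert.h>

   /* Return nonzero if field i (0-based) is NULL according to a BinaryRow
    * bit map: bit 7 (MSB) of byte 0 is field 0, bit 6 is field 1, and so on.
    * A set bit means the field is NOT NULL. */
   static int field_is_null(const unsigned char *bitmap, int i)
   {
       return (bitmap[i / 8] & (0x80 >> (i % 8))) == 0;
   }

   int main(void)
   {
       /* 10 fields: fields 0 and 8 are non-NULL, everything else NULL.
        * Byte 0 = 1000 0000, byte 1 = 1000 0000 (last 6 bits wasted). */
       unsigned char bitmap[2] = { 0x80, 0x80 };

       assert(!field_is_null(bitmap, 0));   /* field 0: not NULL */
       assert(field_is_null(bitmap, 1));    /* field 1: NULL */
       assert(field_is_null(bitmap, 7));    /* field 7: NULL */
       assert(!field_is_null(bitmap, 8));   /* field 8: not NULL */
       return 0;
   }
   ```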
9. The default locations for libraries and headers for the standalone installation are /usr/local/lib and /usr/local/include/iodbc, respectively. There is another system-wide configuration file that gets installed as /share/odbcinst.ini (if /share exists) or as /etc/odbcinst.ini (if /share does not exist).

   Note: Installation of files into /share or /etc requires system root privileges. Most installation steps for Postgres do not have this requirement, and you can choose another destination which is writable by your non-root Postgres superuser account instead.

   1. The standalone installation distribution can be built from the Postgres distribution, or may be obtained from Insight Distributors (http://www.insightdist.com/psqlodbc), the current maintainers of the non-Unix sources.

      Copy the zip or gzipped tarfile to an empty directory. If using the zip package, unzip it with the command

          % unzip -a packagename

      The -a option is necessary to get rid of DOS CR/LF pairs in the source files. If you have the gzipped tar package, then simply run

          % tar -xzf packagename

   2. To create a tar file for a complete standalone installation from the main Postgres source tree, configure the main Postgres distribution, then create the tar file:

          % cd interfaces/odbc
          % make standalone

      Copy the output tar file to your target system. Be sure to transfer as a binary file if using ftp. Unpack the tar file into a clean directory
10. 
        memcpy((void *) VARDATA(new_t), /* destination */
               (void *) VARDATA(t),     /* source */
               VARSIZE(t) - VARHDRSZ);  /* how many bytes */
        return new_t;
    }

    text *
    concat_text(text *arg1, text *arg2)
    {
        int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
        text *new_text = (text *) palloc(new_text_size);

        memset((void *) new_text, 0, new_text_size);
        VARSIZE(new_text) = new_text_size;
        strncpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1) - VARHDRSZ);
        strncat(VARDATA(new_text), VARDATA(arg2), VARSIZE(arg2) - VARHDRSZ);
        return new_text;
    }

    On OSF/1 we would type:

        CREATE FUNCTION add_one(int4) RETURNS int4
            AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
        CREATE FUNCTION makepoint(point, point) RETURNS point
            AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
        CREATE FUNCTION concat_text(text, text) RETURNS text
            AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
        CREATE FUNCTION copytext(text) RETURNS text
            AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';

    On other systems, we might have to make the filename end in .sl to indicate that it's a shared library.

    Programming Language Functions on Composite Types

    Composite types do not have a fixed layout like C structures. Instances of a composite type may contain null fields. In addition, composite types that are part of an inheritance hierarchy may have different fields than other members of the same inheritance hierarchy. Therefore, Postgres provides a procedural interface for accessing fields
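    Outside the server you cannot call palloc or the VARSIZE/VARDATA macros, but the length-prefix arithmetic in concat_text can be sketched in plain C. The struct my_text, the MY_VARHDRSZ constant, and my_concat below are invented stand-ins for text, VARHDRSZ, and concat_text, used only to show why VARHDRSZ is subtracted once: the result carries a single length word.

    ```c
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* A stand-in for the Postgres varlena "text" type: a 4-byte length
     * word (which counts itself) followed by the bytes, NOT null-terminated. */
    typedef struct { int len; char data[1]; } my_text;
    #define MY_VARHDRSZ ((int) sizeof(int))

    static my_text *make_text(const char *s)
    {
        int n = (int) strlen(s);
        my_text *t = malloc(MY_VARHDRSZ + n);
        t->len = MY_VARHDRSZ + n;
        memcpy(t->data, s, n);
        return t;
    }

    /* Mirror of concat_text: total size is the two sizes added,
     * minus one header, since the result has only one length word. */
    static my_text *my_concat(const my_text *a, const my_text *b)
    {
        int size = a->len + b->len - MY_VARHDRSZ;
        my_text *r = malloc(size);
        r->len = size;
        memcpy(r->data, a->data, a->len - MY_VARHDRSZ);
        memcpy(r->data + (a->len - MY_VARHDRSZ), b->data, b->len - MY_VARHDRSZ);
        return r;
    }

    int main(void)
    {
        my_text *a = make_text("foo"), *b = make_text("bar");
        my_text *r = my_concat(a, b);
        assert(r->len == MY_VARHDRSZ + 6);
        assert(memcmp(r->data, "foobar", 6) == 0);
        free(a); free(b); free(r);
        return 0;
    }
    ```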
11. 
        printf(" p = (%d bytes) %d points, boundbox = (hi=%f/%f, lo=%f/%f)\n",
               PQgetlength(res, i, p_fnum),
               pval->npts,
               pval->boundbox.xh,
               pval->boundbox.yh,
               pval->boundbox.xl,
               pval->boundbox.yl);
        }

        PQclear(res);

        /* close the cursor */
        res = PQexec(conn, "CLOSE mycursor");
        PQclear(res);

        /* commit the transaction */
        res = PQexec(conn, "COMMIT");
        PQclear(res);

        /* close the connection to the database and cleanup */
        PQfinish(conn);

    Chapter 17. libpq++ (C++ Binding)

    libpq++ is the C++ API to Postgres. libpq++ is a set of classes which allow client programs to connect to the Postgres backend server. These connections come in two forms: a Database Class and a Large Object class.

    The Database Class is intended for manipulating a database. You can send all sorts of SQL queries to the Postgres backend server and retrieve the responses of the server.

    The Large Object Class is intended for manipulating a large object in a database. Although a Large Object instance can send normal queries to the Postgres backend server, it is only intended for simple queries that do not return any data. A large object should be seen as a file stream; in the future, it should behave much like the C++ file streams cin, cout, and cerr.

    This chapter is based on the documentation for the libpq C library. Three short programs are listed at the end of this section as examples of libpq++ programming (though not necessarily of good programming). There are several examples of
12. 
        ... 40.0, 'inch');
        al_bundy=> INSERT INTO shoe_data VALUES
        al_bundy->  ('sh3', 4, 'brown', 50.0, 65.0, 'cm');
        al_bundy=> INSERT INTO shoe_data VALUES
        al_bundy->  ('sh4', 3, 'brown', 40.0, 50.0, 'inch');
        al_bundy=>
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl1', 5, 'black', 80.0, 'cm');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl2', 6, 'black', 100.0, 'cm');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl3', 0, 'black', 35.0, 'inch');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl4', 8, 'black', 40.0, 'inch');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl5', 4, 'brown', 1.0, 'm');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl6', 0, 'brown', 0.9, 'm');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl7', 7, 'brown', 60, 'cm');
        al_bundy=> INSERT INTO shoelace_data VALUES
        al_bundy->  ('sl8', 1, 'brown', 40, 'inch');
        al_bundy=>
        al_bundy=> SELECT * FROM shoelace;

        sl_name  |sl_avail|sl_color|sl_len|sl_unit|sl_len_cm
        ---------+--------+--------+------+-------+---------
        sl1      |       5|black   |    80|cm     |       80
        sl2      |       6|black   |   100|cm     |      100
        sl7      |       7|brown   |    60|cm     |       60
        sl3      |       0|black   |    35|inch   |     88.9
        sl4      |       8|black   |    40|inch   |    101.6
        sl8      |       1|brown   |    40|inch   |    101.6
        sl5      |       4|brown   |     1|m      |      100
        sl6      |       0|brown   |   0.9|m      |       90
        (8 rows)

    It's the simplest SELECT Al can do on our views, so we take this to explain
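    The sl_len_cm column in the output above is computed by multiplying sl_len by a per-unit factor from a unit table (cm = 1, m = 100, inch = 2.54). A small standalone C sketch of that lookup follows; the table contents match the example, but the function name len_cm is invented.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <string.h>

    /* Unit table as used in the shoelace example: factor converts to cm. */
    struct unit { const char *un_name; double un_fact; };
    static const struct unit units[] = {
        { "cm", 1.0 }, { "m", 100.0 }, { "inch", 2.54 },
    };

    /* len_cm: the view's computed column, sl_len * un_fact */
    static double len_cm(double len, const char *unit_name)
    {
        for (size_t i = 0; i < sizeof units / sizeof units[0]; i++)
            if (strcmp(units[i].un_name, unit_name) == 0)
                return len * units[i].un_fact;
        return -1.0;    /* unknown unit */
    }

    int main(void)
    {
        assert(fabs(len_cm(80, "cm") - 80.0) < 1e-9);    /* sl1 */
        assert(fabs(len_cm(35, "inch") - 88.9) < 1e-9);  /* sl3 */
        assert(fabs(len_cm(0.9, "m") - 90.0) < 1e-9);    /* sl6 */
        return 0;
    }
    ```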
13. 
        AS 'PGROOT/tutorial/obj/complex.so' LANGUAGE 'c';

    Now define the operators that use them. As noted, the operator names must be unique among all operators that take two operands of the same types. In order to see if the operator names listed below are taken, we can do a query on pg_operator:

        /*
         * this query uses the regular expression operator (~)
         * to find three-character operator names that end in
         * the character &
         */
        SELECT *
        FROM pg_operator
        WHERE oprname ~ '^..&$'::text;

    to see if your name is taken for the types you want.

    The important things here are the procedures (which are the C functions defined above) and the restriction and join selectivity functions. You should just use the ones used below; note that there are different such functions for the less-than, equal, and greater-than cases. These must be supplied, or the access method will crash when it tries to use the operator. You should copy the names for restrict and join, but use the procedure names you defined in the last step.

        CREATE OPERATOR = (
            leftarg = complex_abs,
            rightarg = complex_abs,
            procedure = complex_abs_eq,
            restrict = eqsel,
            join = eqjoinsel
        );

    Notice that five operators, corresponding to less, less-equal, equal, greater, and greater-equal, are defined.

    We're just about finished. The last thing we need to do is to update the pg_amop relation. To do this, we need the following attributes:
14. As described in the section about the preprocessor, every input variable gets ten arguments. These variables are filled by the function.

    ECPGt_EORT: An enum telling that there are no more variables.

    All the SQL statements are performed in one transaction unless you issue a commit transaction. To get this auto-transaction going, the first statement, or the first statement after a commit or rollback, always begins a transaction. To disable this feature per default, use the -t option on the command line.

    To be completed: entries describing the other entries.

    Chapter 20. ODBC Interface

    Note: Background information originally by Tim Goeke (mailto:tgoeke@xpressway.com).

    ODBC (Open Database Connectivity) is an abstract API which allows you to write applications which can interoperate with various RDBMS servers. ODBC provides a product-neutral interface between frontend applications and database servers, allowing a user or developer to write applications which are transportable between servers from different manufacturers.

    Background

    The ODBC API matches up on the backend to an ODBC-compatible data source. This could be anything from a text file to an Oracle or Postgres RDBMS. The backend access comes from ODBC drivers, or vendor-specific drivers that allow data access. psqlODBC is such a driver, along with others that are available, such as the OpenLink ODBC drivers.

    Once you write an ODBC application, you should be able to connect
15. 
        char *
        complex_out(Complex *complex)
        {
            char *result;

            if (complex == NULL)
                return NULL;
            result = (char *) palloc(60);
            sprintf(result, "(%g,%g)", complex->x, complex->y);
            return result;
        }

    You should try to make the input and output functions inverses of each other. If you do not, you will have severe problems when you need to dump your data into a file and then read it back in (say, into someone else's database on another computer). This is a particularly common problem when floating-point numbers are involved.

    To define the complex type, we need to create the two user-defined functions complex_in and complex_out before creating the type:

        CREATE FUNCTION complex_in(opaque)
            RETURNS complex
            AS 'PGROOT/tutorial/obj/complex.so'
            LANGUAGE 'c';

        CREATE FUNCTION complex_out(opaque)
            RETURNS opaque
            AS 'PGROOT/tutorial/obj/complex.so'
            LANGUAGE 'c';

        CREATE TYPE complex (
            internallength = 16,
            input = complex_in,
            output = complex_out
        );

    As discussed earlier, Postgres fully supports arrays of base types. Additionally, Postgres supports arrays of user-defined types as well. When you define a type, Postgres automatically provides support for arrays of that type. For historical reasons, the array type has the same name as the user-defined type with the underscore character _ prepended.

    Composite types do not need any function defined on them, since the system already understands what they look like
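    The inverse property can be demonstrated outside the server with plain sprintf/sscanf, the same calls the example input/output functions use. This is a standalone sketch: the struct and function names mirror the tutorial, but malloc replaces palloc and complex_in is written here as a minimal parser, not the tutorial's actual code.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Complex { double x, y; } Complex;

    /* Output function: external form "(x,y)", as in complex_out */
    static char *complex_out(const Complex *c)
    {
        char *result = malloc(60);
        sprintf(result, "(%g,%g)", c->x, c->y);
        return result;
    }

    /* Input function: parse "(x,y)" back, in the spirit of complex_in */
    static Complex complex_in(const char *str)
    {
        Complex c = { 0.0, 0.0 };
        sscanf(str, "(%lf,%lf)", &c.x, &c.y);
        return c;
    }

    int main(void)
    {
        Complex a = { 3.567, -0.25 };
        char *s = complex_out(&a);
        Complex b = complex_in(s);

        /* the round trip must reproduce the original value */
        assert(fabs(a.x - b.x) < 1e-9 && fabs(a.y - b.y) < 1e-9);
        free(s);
        return 0;
    }
    ```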
16. 
        -I$PGROOT/src/backend/port/$PORTNAME
        -I$PGROOT/src/backend/obj

    where PORTNAME is the name of the port, e.g., alpha or sparc.

    When allocating memory, use the Postgres routines palloc and pfree instead of the corresponding C library routines malloc and free. The memory allocated by palloc will be freed automatically at the end of each transaction, preventing memory leaks.

    Always zero the bytes of your structures using memset or bzero. Several routines (such as the hash access method, hash join, and the sort algorithm) compute functions of the raw bits contained in your structure. Even if you initialize all fields of your structure, there may be several bytes of alignment padding (holes in the structure) that may contain garbage values.

    Most of the internal Postgres types are declared in postgres.h, so it's a good idea to always include that file as well. Including postgres.h will also include elog.h and palloc.h for you.

    Compiling and loading your object code so that it can be dynamically loaded into Postgres always requires special flags. See Appendix A for a detailed explanation of how to do it for your particular operating system.

    Chapter 5. Extending SQL: Types

    As previously mentioned, there are two kinds of types in Postgres: base types (defined in a programming language) and composite types (instances). Examples in this section up to interfacing indices can be found in complex.sql and complex.c. Composite examples are in funcs.sql and funcs.c.
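    The padding warning above can be demonstrated in standalone C: after memset, a raw-bit comparison of two logically equal structs is reliable, because the padding bytes between the fields have been zeroed too. (The struct below is invented for illustration.)

    ```c
    #include <assert.h>
    #include <string.h>

    /* A char followed by a double typically leaves several bytes of
     * alignment padding between c and d on common platforms. */
    struct padded { char c; double d; };

    int main(void)
    {
        struct padded a, b;

        /* zero every byte first, as the guideline says ... */
        memset(&a, 0, sizeof a);
        memset(&b, 0, sizeof b);
        a.c = b.c = 'x';
        a.d = b.d = 1.5;

        /* ... so a raw-bit comparison (as hash join might do) is reliable */
        assert(memcmp(&a, &b, sizeof a) == 0);
        return 0;
    }
    ```

    Without the memset calls, the padding holes would hold whatever garbage was on the stack, and memcmp (or a hash of the raw bytes) could disagree for structs whose fields are identical.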
17. In practice this means that the join operator must behave like equality. But unlike hashjoin, where the left and right data types had better be the same (or at least bitwise equivalent), it is possible to mergejoin two distinct data types so long as they are logically compatible. For example, the int2-versus-int4 equality operator is mergejoinable. We only need sorting operators that will bring both datatypes into a logically compatible sequence.

    When specifying merge sort operators, the current operator and both referenced operators must return boolean; the SORT1 operator must have both input datatypes equal to the current operator's left argument type, and the SORT2 operator must have both input datatypes equal to the current operator's right argument type. As with COMMUTATOR and NEGATOR, this means that the operator name is sufficient to specify the operator, and the system is able to make dummy operator entries if you happen to define the equality operator before the other ones.

    In practice you should only write SORT clauses for an '=' operator, and the two referenced operators should always be named '<'. Trying to use merge join with operators named anything else will result in hopeless confusion, for reasons we'll see in a moment.

    There are additional restrictions on operators that you mark mergejoinable. These restrictions are not currently checked by CREATE OPERATOR, but a merge join may fail at runtime if any are not true.
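    The int2-versus-int4 case can be sketched in standalone C: a short-typed column and an int-typed column, each sorted by its own '<', merged on equality. The data and the function name merge_join_count are invented for illustration (and the sketch assumes distinct values on each side, so it counts each match once).

    ```c
    #include <assert.h>

    /* merge_join_count: count matching values between two sorted columns,
     * one int2-like (short) and one int4-like (int). Both sides must be
     * sorted by a '<' that yields a logically compatible sequence. */
    static int merge_join_count(const short *a, int na, const int *b, int nb)
    {
        int i = 0, j = 0, matches = 0;
        while (i < na && j < nb)
        {
            if ((int) a[i] < b[j])
                i++;                     /* advance the smaller side */
            else if ((int) a[i] > b[j])
                j++;
            else
            {
                matches++;               /* join condition a = b holds */
                i++;
                j++;
            }
        }
        return matches;
    }

    int main(void)
    {
        short a[] = { 1, 3, 5, 7 };      /* sorted by the int2 '<' */
        int   b[] = { 3, 4, 5, 8 };      /* sorted by the int4 '<' */
        assert(merge_join_count(a, 4, b, 4) == 2);   /* 3 and 5 match */
        return 0;
    }
    ```

    If either input were sorted by anything other than a '<'-compatible ordering, the advance-the-smaller-side step would skip matches; this is exactly why the referenced sort operators must bring both datatypes into one logical sequence.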
18. PgConnection::Notifies: see PQnotifies for details.

    Query Execution Functions

    Exec: sends a query to the backend server. It's probably more desirable to use one of the next two functions.

        ExecStatusType PgConnection::Exec(const char *query)

    Returns the result status of the query. The following status results can be expected:

        PGRES_EMPTY_QUERY
        PGRES_COMMAND_OK      (if the query was a command)
        PGRES_TUPLES_OK       (if the query successfully returned tuples)
        PGRES_COPY_OUT
        PGRES_COPY_IN
        PGRES_BAD_RESPONSE    (if an unexpected response was received)
        PGRES_NONFATAL_ERROR
        PGRES_FATAL_ERROR

    ExecCommandOk: sends a command query to the backend server.

        int PgConnection::ExecCommandOk(const char *query)

    Returns TRUE if the command query succeeds.

    ExecTuplesOk: sends a command query to the backend server.

        int PgConnection::ExecTuplesOk(const char *query)

    Returns TRUE if the command query succeeds and there are tuples to be retrieved.

    ErrorMessage: returns the last error message text.

        const char *PgConnection::ErrorMessage()

    Tuples: returns the number of tuples (instances) in the query result.

        int PgDatabase::Tuples()

    Fields: returns the number of fields (attributes) in each tuple of the query result.

        int PgDatabase::Fields()

    FieldName: returns the field (attribute) name associated with the given field index. Field indices start at 0.

        const char *PgDatabase::FieldName(int field_num)

    FieldNum:
19. database application domains that involve the need for extensive queries, such as artificial intelligence.

    The Institute of Automatic Control at the University of Mining and Technology in Freiberg, Germany, encountered the described problems when its folks wanted to take the Postgres DBMS as the backend for a decision support knowledge-based system for the maintenance of an electrical power grid. The DBMS needed to handle large join queries for the inference machine of the knowledge-based system. Performance difficulties in exploring the space of possible query plans arose, creating the demand for a new optimization technique to be developed.

    In the following we propose the implementation of a Genetic Algorithm as an option for the database query optimization problem.

    Genetic Algorithms (GA)

    The GA is a heuristic optimization method which operates through determined, randomized search. The set of possible solutions for the optimization problem is considered as a population of individuals. The degree of adaptation of an individual to its environment is specified by its fitness.

    The coordinates of an individual in the search space are represented by chromosomes, in essence a set of character strings. A gene is a subsection of a chromosome which encodes the value of a single parameter being optimized. Typical encodings for a gene could be binary or integer.

    Through simulation of the evolutionary operations recombination, mutation, and selection, new generations of search points are found that show a higher average fitness than their ancestors.
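    As a toy illustration of those three operations, here is a standalone GA that maximizes the number of 1-bits in a binary chromosome (the classic OneMax problem). Everything in it (the problem, the parameters, the tiny LCG random generator) is invented for the sketch and has nothing to do with GEQO's actual plan encoding.

    ```c
    #include <assert.h>
    #include <string.h>

    #define POP 20
    #define GENES 32
    #define GENERATIONS 60

    /* tiny deterministic LCG so the run is reproducible */
    static unsigned long lcg_seed = 12345;
    static unsigned rnd(unsigned n)
    {
        lcg_seed = lcg_seed * 1103515245UL + 12345UL;
        return (unsigned) ((lcg_seed >> 16) % n);
    }

    /* fitness: number of 1 genes (OneMax) */
    static int fitness(const char *c)
    {
        int f = 0;
        for (int i = 0; i < GENES; i++) f += c[i];
        return f;
    }

    static int best(char pop[POP][GENES])
    {
        int b = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[b])) b = i;
        return b;
    }

    int main(void)
    {
        char pop[POP][GENES];
        for (int i = 0; i < POP; i++)
            for (int g = 0; g < GENES; g++)
                pop[i][g] = (char) rnd(2);

        int initial = fitness(pop[best(pop)]);

        for (int gen = 0; gen < GENERATIONS; gen++)
        {
            char child[GENES];

            /* selection: two tournament winners become parents */
            int p1 = rnd(POP), p2 = rnd(POP);
            if (fitness(pop[p2]) > fitness(pop[p1])) p1 = p2;
            int q1 = rnd(POP), q2 = rnd(POP);
            if (fitness(pop[q2]) > fitness(pop[q1])) q1 = q2;

            /* recombination: one-point crossover */
            unsigned cut = rnd(GENES);
            memcpy(child, pop[p1], cut);
            memcpy(child + cut, pop[q1] + cut, GENES - cut);

            /* mutation: occasionally flip one random gene */
            if (rnd(10) == 0) child[rnd(GENES)] ^= 1;

            /* replacement: child replaces the worst individual if fitter */
            int w = 0;
            for (int i = 1; i < POP; i++)
                if (fitness(pop[i]) < fitness(pop[w])) w = i;
            if (fitness(child) > fitness(pop[w])) memcpy(pop[w], child, GENES);
        }

        /* this scheme never discards the current best individual */
        assert(fitness(pop[best(pop)]) >= initial);
        return 0;
    }
    ```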
20. a delete(l), update(l), fetch(l), or a select(l) command. If the transaction has been aborted, then the backend sends a CompletedResponse message with a tag of "*ABORT STATE*". Otherwise the following responses are sent.

    For an insert(l) command, the backend then sends a CompletedResponse message with a tag of "INSERT oid rows", where rows is the number of rows inserted, and oid is the object ID of the inserted row if rows is 1; otherwise oid is 0.

    For a delete(l) command, the backend then sends a CompletedResponse message with a tag of "DELETE rows", where rows is the number of rows deleted.

    For an update(l) command, the backend then sends a CompletedResponse message with a tag of "UPDATE rows", where rows is the number of rows updated.

    For a fetch(l) or select(l) command, the backend sends a RowDescription message. This is then followed by an AsciiRow or BinaryRow message (depending on whether a binary cursor was specified) for each row being returned to the frontend. Finally, the backend sends a CompletedResponse message with a tag of "SELECT".

    EmptyQueryResponse: An empty query string was recognized. (The need to specially distinguish this case is historical.)

    ErrorResponse: An error has occurred.

    ReadyForQuery: Processing of the query string is complete. A separate message is sent to indicate this because the query string may contain multiple SQL commands. (CompletedResponse marks the end of
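    A frontend has to parse these tag strings to recover the oid and row counts. A standalone sketch with sscanf follows; the function name parse_insert_tag is invented, and real frontends each do this their own way.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Parse an "INSERT oid rows" CompletedResponse tag.
     * Returns 1 on success, filling *oid and *rows. */
    static int parse_insert_tag(const char *tag, unsigned *oid, unsigned *rows)
    {
        return sscanf(tag, "INSERT %u %u", oid, rows) == 2;
    }

    int main(void)
    {
        unsigned oid, rows;

        /* single-row insert: oid is the new row's object ID */
        assert(parse_insert_tag("INSERT 167793 1", &oid, &rows));
        assert(oid == 167793 && rows == 1);

        /* multi-row insert: oid is reported as 0 */
        assert(parse_insert_tag("INSERT 0 5", &oid, &rows));
        assert(oid == 0 && rows == 5);

        /* a DELETE tag does not parse as INSERT */
        assert(!parse_insert_tag("DELETE 5", &oid, &rows));
        return 0;
    }
    ```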
21. http://www.ora.com/homepages/dtdparse/docbook/3.0 provides a powerful cross-reference for features of DocBook.

    This documentation set is constructed using several tools, including James Clark's jade (http://www.jclark.com/jade) and Norm Walsh's Modular DocBook Stylesheets (http://nwalsh.com/docbook/dsssl).

    Currently, hardcopy is produced by importing Rich Text Format (RTF) output from jade into ApplixWare for minor formatting fixups, then exporting as a PostScript file.

    TeX (http://sunsite.unc.edu/pub/packages/TeX/systems/unix) is a supported format for jade output, but was not used at this time for several reasons, including the inability to make minor format fixes before committing to hardcopy and generally inadequate table support in the TeX stylesheets.

    Documentation Sources

    Documentation sources include plain text files, man pages, and HTML. However, most new Postgres documentation will be written using the Standard Generalized Markup Language (SGML) DocBook (http://www.ora.com/davenport) Document Type Definition (DTD). Much of the existing documentation has been or will be converted to SGML.

    The purpose of SGML is to allow an author to specify the structure and content of a document (e.g., using the DocBook DTD), and to have the document style define how that content is rendered into a final form (e.g., using Norm Walsh's stylesheets).

    Documentation has accumulated from several sources. As we integrate and assimilate existing
22. 
        /*
         * importFile
         *     import file "filename" into database as large object
         */
        Oid
        importFile(PGconn *conn, char *filename)
        {
            Oid         lobjId;
            int         lobj_fd;
            char        buf[BUFSIZE];
            int         nbytes,
                        tmp;
            int         fd;

            /*
             * open the file to be read in
             */
            fd = open(filename, O_RDONLY, 0666);
            if (fd < 0)
            {                           /* error */
                fprintf(stderr, "can't open unix file %s\n", filename);
            }

            /*
             * create the large object
             */
            lobjId = lo_creat(conn, INV_READ | INV_WRITE);
            if (lobjId == 0)
                fprintf(stderr, "can't create large object\n");

            lobj_fd = lo_open(conn, lobjId, INV_WRITE);

            /*
             * read in from the Unix file and write to the inversion file
             */
            while ((nbytes = read(fd, buf, BUFSIZE)) > 0)
            {
                tmp = lo_write(conn, lobj_fd, buf, nbytes);
                if (tmp < nbytes)
                    fprintf(stderr, "error while reading large object\n");
            }

            (void) close(fd);
            (void) lo_close(conn, lobj_fd);

            return lobjId;
        }

        void
        pickout(PGconn *conn, Oid lobjId, int start, int len)
        {
            int         lobj_fd;
            char       *buf;
            int         nbytes;
            int         nread;

            lobj_fd = lo_open(conn, lobjId, INV_READ);
            if (lobj_fd < 0)
            {
                fprintf(stderr, "can't open large object %d\n", lobjId);
            }

            lo_lseek(conn, lobj_fd, start, SEEK_SET);
            buf = malloc(len + 1);

            nread = 0;
            while (len - nread > 0)
            {
                nbytes = lo_read(conn, lobj_fd, buf, len - nread);
                buf[nbytes] = '\0';
                fprintf(stderr, ">>> %s", buf);
                nread += nbytes;
            }
            fprintf(stderr, "\n");
            lo_close(conn, lobj_fd);
        }

        void
        overwrite(PGconn *conn, Oid
23. 
        ret = SPI_connect();
        if (ret < 0)
            elog(WARN, "trigf (fired %s): SPI_connect returned %d", when, ret);

        /* Get number of tuples in relation */
        ret = SPI_exec("select count(*) from ttest", 0);
        if (ret < 0)
            elog(WARN, "trigf (fired %s): SPI_exec returned %d", when, ret);

        i = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isnull);

        elog(NOTICE, "trigf (fired %s): there are %d tuples in ttest", when, i);

        SPI_finish();

        if (checknull)
        {
            i = SPI_getbinval(rettuple, tupdesc, 1, &isnull);
            if (isnull)
                rettuple = NULL;
        }

        return (rettuple);

    Now, compile and create the table and the function:

        create table ttest (x int4);
        create function trigf () returns opaque as
        'path_to_so' language 'c';

        vac=> create trigger tbefore before insert or update or delete on ttest
              for each row execute procedure trigf();
        CREATE
        vac=> create trigger tafter after insert or update or delete on ttest
              for each row execute procedure trigf();
        CREATE
        vac=> insert into ttest values (null);
        NOTICE: trigf (fired before): there are 0 tuples in ttest
        INSERT 0 0

        -- Insertion skipped and AFTER trigger is not fired

        vac=> select * from ttest;
        x
        -
        (0 rows)

        vac=> insert into ttest values (1);
        NOTICE: trigf (fired before): there are 0 tuples in ttest
        NOTICE: trigf (fired after): there are 1 tuples in ttest
                                               ^^^^^^^^
                             remember what we said about visibility
        INSERT 167793 1
24. which keeps track of any system errors and communication between the backend processes. The postmaster can take several command-line arguments to tune its behavior. However, supplying arguments is necessary only if you intend to run multiple sites or a non-default site.

    The Postgres backend (the actual executable program postgres) may be executed directly from the user shell by the Postgres super-user (with the database name as an argument). However, doing this bypasses the shared buffer pool and lock table associated with a postmaster/site; therefore this is not recommended in a multiuser site.

    Notation

    ".." or "/usr/local/pgsql/" at the front of a file name is used to represent the path to the Postgres superuser's home directory.

    In a command synopsis, brackets ("[" and "]") indicate an optional phrase or keyword. Anything in braces ("{" and "}") and containing vertical bars ("|") indicates that you must choose one.

    In examples, parentheses ("(" and ")") are used to group boolean expressions. "|" is the boolean operator OR.

    Examples will show commands executed from various accounts and programs. Commands executed from the root account will be preceded with ">". Commands executed from the Postgres superuser account will be preceded with "%", while commands executed from an unprivileged user's account will be preceded with "$". SQL commands will be preceded with "=>" or will have no leading prompt, depending on the context.

    Note: At the time of writing
will only do its scans if there ever could be something to do. Rules will only be significantly slower than triggers if their actions result in large and badly qualified joins, a situation where the optimizer fails. They are a big hammer. Using a big hammer without caution can cause big damage. But used with the right touch, they can hit any nail on the head.

Chapter 9. Interfacing Extensions To Indices

The procedures described thus far let you define a new type, new functions and new operators. However, we cannot yet define a secondary index (such as a B-tree, R-tree or hash access method) over a new type or its operators.

Look back at "The major Postgres system catalogs". The right half shows the catalogs that we must modify in order to tell Postgres how to use a user-defined type and/or user-defined operators with an index (i.e., pg_am, pg_amop, pg_amproc, pg_operator and pg_opclass). Unfortunately, there is no simple command to do this. We will demonstrate how to modify these catalogs through a running example: a new operator class for the B-tree access method that stores and sorts complex numbers in ascending absolute value order.

The pg_am class contains one instance for every user-defined access method. Support for the heap access method is built into Postgres, but every other access method is described here. The schema is

Table 9-1. Index Schema

    amkind        not used at present, but set to 'o' as a place holder
    amstrategies  nu
        sallim ALIAS FOR $2;
    BEGIN
        IF emprec.salary ISNULL THEN
            RETURN ''f'';
        END IF;
        RETURN emprec.salary > sallim;
    END;
    ' LANGUAGE 'plpgsql';

A PL/pgSQL Trigger Procedure

This trigger ensures that any time a row is inserted or updated in the table, the current username and time are stamped into the row. And it ensures that an employee's name is given and that the salary is a positive value.

    CREATE TABLE emp (
        empname   text,
        salary    int4,
        last_date datetime,
        last_user name
    );

    CREATE FUNCTION emp_stamp () RETURNS OPAQUE AS '
        BEGIN
            -- Check that empname and salary are given
            IF NEW.empname ISNULL THEN
                RAISE EXCEPTION ''empname cannot be NULL value'';
            END IF;
            IF NEW.salary ISNULL THEN
                RAISE EXCEPTION ''% cannot have NULL salary'', NEW.empname;
            END IF;

            -- Who works for us when she must pay for it?
            IF NEW.salary < 0 THEN
                RAISE EXCEPTION ''% cannot have a negative salary'', NEW.empname;
            END IF;

            -- Remember who changed the payroll when
            NEW.last_date := ''now'';
            NEW.last_user := getpgusername();
            RETURN NEW;
        END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp
        FOR EACH ROW EXECUTE PROCEDURE emp_stamp();

PL/Tcl

PL/Tcl is a loadable procedural language for the Postgres database system that enables the Tcl language to be used to create functions and trigger procedures. This package was originally written by Jan Wieck.

Overview

PL/Tcl offers most
Chapter 8. The Postgres Rule System

planner/optimizer would be exactly the same as if Al had typed the above SELECT query instead of the view selection.

Now we face Al with the problem that the Blues Brothers appear in his shop and want to buy some new shoes, and as the Blues Brothers are, they want to wear the same shoes. And they want to wear them immediately, so they need shoelaces too. Al needs to know for which shoes currently in the store he has the matching shoelaces (color and size) and where the total number of exactly matching pairs is greater than or equal to two. We teach him how to do it and he asks his database:

    al_bundy=> SELECT * FROM shoe_ready WHERE total_avail >= 2;
    shoename |sh_avail|sl_name |sl_avail|total_avail
    ---------+--------+--------+--------+-----------
    sh1      |       2|sl1     |       5|          2
    sh3      |       4|sl7     |       7|          4
    (2 rows)

Al is a shoe guru and so he knows that only shoes of type sh1 would fit (shoelace sl7 is brown and shoes that need brown shoelaces aren't shoes the Blues Brothers would ever wear).

The output of the parser this time is the parsetree

    SELECT shoe_ready.shoename, shoe_ready.sh_avail,
           shoe_ready.sl_name, shoe_ready.sl_avail,
           shoe_ready.total_avail
    FROM shoe_ready shoe_ready
    WHERE int4ge(shoe_ready.total_avail, 2);

The first rule applied will be the one for the shoe_ready relation and it results in the parsetree

    SELECT rsh.shoename, rsh.sh_avail,
           rsl.sl_name, rsl.sl_avail,
           min(rsh.sh_avail, rsl.sl_av
    CREATE FUNCTION funcname (argument-types) RETURNS returntype AS '
        # PL/Tcl function body
    ' LANGUAGE 'pltcl';

When calling this function in a query, the arguments are given as variables $1 ... $n to the Tcl procedure body. So a little max function returning the higher of two int4 values would be created as

    CREATE FUNCTION tcl_max (int4, int4) RETURNS int4 AS '
        if {$1 > $2} {return $1}
        return $2
    ' LANGUAGE 'pltcl';

Composite type arguments are given to the procedure as Tcl arrays. The element names in the array are the attribute names of the composite type. If an attribute in the actual row has the NULL value, it will not appear in the array. Here is an example that defines the overpaid_2 function (as found in the older Postgres documentation) in PL/Tcl:

    CREATE FUNCTION overpaid_2 (EMP) RETURNS bool AS '
        if {200000.0 < $1(salary)} {
            return "t"
        }
        if {$1(age) < 30 && 100000.0 < $1(salary)} {
            return "t"
        }
        return "f"
    ' LANGUAGE 'pltcl';

Global Data in PL/Tcl

Sometimes (especially when using the SPI functions described later) it is useful to have some global status data that is held between two calls to a procedure. All PL/Tcl procedures executed in one backend share the same safe Tcl interpreter. To help protect PL/Tcl procedures from side effects, an array is made available to each procedure via the upvar command. The global name of this variable is the procedure's internal name and the lo
Database Connection Functions ..... 140
Query Execution Functions ..... 140
Asynchronous Notification ..... 144
Functions Associated with the COPY Command ..... 144
Caveats ..... 145
18. pgtcl ..... 146
    Commands ..... 146
    Examples ..... 147
    pgtcl Command Reference Information ..... 147
        pg_connect ..... 147
        pg_disconnect ..... 149
        pg_conndefaults ..... 150
        pg_exec ..... 151
        pg_result ..... 152
        pg_select ..... 153
        pg_listen ..... 155
        pg_lo_creat ..... 156
        pg_lo_open ..... 157
        pg_lo_close ..... 158
        pg_lo_read ..... 158
        pg_lo_write ..... 160
        pg_lo_lseek ..... 161
        pg_lo_tell ..... 162
        pg_lo_unlink ..... 162
        pg_lo_import ..... 163
        pg_lo_export ..... 164
19. ecpg - Embedded SQL in C ..... 165
    Why Embedded SQL? ..... 165
    The Concept ..... 165
    How To Use ecpg ..... 165
        Preprocessor ..... 165
        Library ..... 166
        Error handling ..... 166
    Limitations ..... 168
    Porting From Other RDBMS Packages .....
currently not available (but he committed to buy some), he also prepared his database for pink ones:

    al_bundy=> INSERT INTO shoelace VALUES
    al_bundy->     ('sl9', 0, 'pink', 35.0, 'inch', 0.0);
    al_bundy=> INSERT INTO shoelace VALUES
    al_bundy->     ('sl10', 1000, 'magenta', 40.0, 'inch', 0.0);

Since this happens often, we must lookup for shoelace entries that fit for absolutely no shoe sometimes. We could do that in a complicated statement every time, or we can setup a view for it. The view for this is

    CREATE VIEW shoelace_obsolete AS
        SELECT * FROM shoelace WHERE NOT EXISTS
            (SELECT shoename FROM shoe WHERE slcolor = sl_color);

Its output is

    al_bundy=> SELECT * FROM shoelace_obsolete;
    sl_name  |sl_avail|sl_color |sl_len|sl_unit|sl_len_cm
    ---------+--------+---------+------+-------+---------
    sl9      |       0|pink     |    35|inch   |     88.9
    sl10     |    1000|magenta  |    40|inch   |    101.6

    CREATE VIEW shoelace_candelete AS
        SELECT * FROM shoelace_obsolete WHERE sl_avail = 0;

and do it this way:

    DELETE FROM shoelace WHERE EXISTS
        (SELECT * FROM shoelace_candelete
         WHERE sl_name = shoelace.sl_name);

Voila:

    al_bundy=> SELECT * FROM shoelace;
    sl_name  |sl_avail|sl_color |sl_len|sl_unit|sl_len_cm
    ---------+--------+---------+------+-------+---------
    sl1      |       5|black    |    80|cm     |       80
    sl2      |       6|black    |   100|cm     |      100
    sl7      |       6|brown    |    60|cm     |       60
    sl4      |       8|black    |    40|inch   |    101.6
    sl3      |      10|black    |    35|inch   |     88.9
    sl8      |      21|brown    |    40|inch   |    101.6
    sl10     |    1000|magenta  |    40|inch   |    101.6
    sl5      |       4|brown    |     1|m      |      100
    sl6      |      20|brown    |   0.9|m      |       90
    (9 rows)

For the 1000 magenta shoelaces we must debt Al before we can throw
Caveats

The query buffer is 8192 bytes long, and queries over that length will be silently truncated.

Chapter 18. pgtcl

pgtcl is a tcl package for front-end programs to interface with Postgres backends. It makes most of the functionality of libpq available to tcl scripts. This package was originally written by Jolly Chen.

Commands

Table 18-1. pgtcl Commands

These commands are described further on subsequent pages.

The pg_lo* routines are interfaces to the Large Object features of Postgres. The functions are designed to mimic the analogous file system functions in the standard Unix file system interface. The pg_lo* routines should be used within a BEGIN/END transaction block because the file descriptor returned by pg_lo_open is only valid for the current transaction. pg_lo_import and pg_lo_export MUST be used in a BEGIN/END transaction block.

Examples

Here's a small example of how to use the routines:

    # getDBs:
    #   get the names of all the databases at a given host and port number
    #   with the defaults being the localhost and port 5432
    #   return them in alphabetical order
    proc getDBs { {host "localhost"} {port "5432"} } {
        # datnames is the list to be result
        set conn [pg_connect template1 -host $host -port $port]
        set res [pg_exec $conn "SELECT datname FROM pg_database ORDER BY datname"]
        set ntups [pg_result $res -numTuples]
        for {set i 0}
    *NEW*, shoelace_data *OLD*, shoelace_log shoelace_log
    WHERE int4ne(6, shoelace_data.sl_avail)
        AND bpchareq(shoelace_data.sl_name, 'sl7');

That's it. So reduced to the max, the return from the rule system is a list of two parsetrees that are the same as the statements

    INSERT INTO shoelace_log SELECT
        shoelace_data.sl_name, 6,
        getpgusername(), 'now'
    FROM shoelace_data
    WHERE 6 != shoelace_data.sl_avail
        AND shoelace_data.sl_name = 'sl7';

    UPDATE shoelace_data SET sl_avail = 6
    WHERE sl_name = 'sl7';

These are executed in this order, and that is exactly what the rule defines. The substitutions and the qualifications added ensure that, if the original query would be an

    UPDATE shoelace_data SET sl_color = 'green'
    WHERE sl_name = 'sl7';

no log entry would get written. Because this time the original parsetree does not contain a targetlist entry for sl_avail, *NEW*.sl_avail will get replaced by shoelace_data.sl_avail, resulting in the extra query

    INSERT INTO shoelace_log SELECT
        shoelace_data.sl_name, shoelace_data.sl_avail,
        getpgusername(), 'now'
    FROM shoelace_data
    WHERE shoelace_data.sl_avail != shoelace_data.sl_avail
        AND shoelace_data.sl_name = 'sl7';

and that qualification will never be true. Since there is no difference on parsetree level between an INSERT ... SELECT and an INSERT ... VALUES, it will also work if the original query modifies multiple rows. So if Al woul
Note that you can safely skip the call to SPI_finish if you abort the transaction via elog(ERROR).

Algorithm

SPI_finish performs the following: disconnects your procedure from the SPI manager and frees all memory allocations made by your procedure via palloc since the SPI_connect. These allocations can't be used any more. See Memory management.

Chapter 14. Server Programming Interface

SPI_exec

Name

SPI_exec -- creates an execution plan (parser+planner+optimizer) and executes a query

Synopsis

    SPI_exec(query, tcount)

Inputs

    char *query
        String containing query plan
    int tcount
        Maximum number of tuples to return

Outputs

    int
        SPI_ERROR_UNCONNECTED if called from an un-connected procedure
        SPI_ERROR_ARGUMENT if query is NULL or tcount < 0
        SPI_ERROR_COPY if COPY TO/FROM stdin
        SPI_ERROR_CURSOR if DECLARE/CLOSE CURSOR, FETCH
        SPI_ERROR_TRANSACTION if BEGIN/ABORT/END
        SPI_ERROR_OPUNKNOWN if type of query is unknown (this shouldn't occur)

If execution of your query was successful then one of the following (non-negative) values will be returned:

        SPI_OK_UTILITY if some utility (e.g. CREATE TABLE ...) was executed
        SPI_OK_SELECT if SELECT (but not SELECT ... INTO!) was executed
        SPI_OK_SELINTO if SELECT ... INTO was executed
        SPI_OK_INSERT if INSERT (or INSERT ... SELECT) was executed
        SPI_OK_DELETE if DELETE was
     *   TBL1 DO
     *     (INSERT INTO TBL2 values (new.i); NOTIFY TBL2);
     * and do
     *   INSERT INTO TBL1 values (10);
     */
    #include <stdio.h>
    #include "libpq-fe.h"

    void
    exit_nicely(PGconn *conn)
    {
        PQfinish(conn);
        exit(1);
    }

    main()
    {
        char       *pghost, *pgport, *pgoptions, *pgtty;
        char       *dbName;
        int         nFields;
        int         i, j;
        PGconn     *conn;
        PGresult   *res;
        PGnotify   *notify;

        /*
         * begin, by setting the parameters for a backend connection; if the
         * parameters are null, then the system will try to use reasonable
         * defaults by looking up environment variables or, failing that,
         * using hardwired constants
         */
        pghost = NULL;      /* host name of the backend server */
        pgport = NULL;      /* port of the backend server */
        pgoptions = NULL;   /* special options to start up the backend server */
        pgtty = NULL;       /* debugging tty for the backend server */
        dbName = getenv("USER");  /* change this to the name of your test database */

        /* make a connection to the database */
        conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);

        /* check to see that the backend connection was successfully made */
        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
            fprintf(stderr, "%s", PQerrorMessage(conn));
            exit_nicely(conn);
        }

        res = PQexec(conn, "LISTEN TBL2");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "LISTEN command failed\n");
            PQclear(res);
            exit_nicely(conn);
        }
-201, Too many arguments line %d.
    This means that Postgres has returned more arguments than we have matching variables. Perhaps you have forgotten a couple of the host variables in the INTO :var1,:var2 list.

-202, Too few arguments line %d.
    This means that Postgres has returned fewer arguments than we have host variables. Perhaps you have too many host variables in the INTO :var1,:var2 list.

-203, Too many matches line %d.
    This means that the query has returned several lines but the variables specified are not arrays. The SELECT you made probably was not unique.

-204, Not correctly formatted int type: %s line %d.
    This means that the host variable is of an int type and the field in the Postgres database is of another type and contains a value that cannot be interpreted as an int. The library uses strtol for this conversion.

-205, Not correctly formatted unsigned type: %s line %d.
    This means that the host variable is of an unsigned int type and the field in the Postgres database is of another type and contains a value that cannot be interpreted as an unsigned int. The library uses strtoul for this conversion.

-206, Not correctly formatted floating point type: %s line %d.
    This means that the host variable is of a float type and the field in the Postgres database is of another type and contains a value that cannot be interpreted as a float. The library uses strtod for this conversion.

-207, Unable to convert %s to bool on line %d.
    This means that
Tuple and field indices start at 0.

    const char *PgDatabase::GetValue(int tup_num, int field_num)

For most queries, the value returned by GetValue is a null-terminated ASCII string representation of the attribute value. But if BinaryTuples() is TRUE, the value returned by GetValue is the binary representation of the type in the internal format of the backend server (but not including the size word, if the field is variable-length). It is then the programmer's responsibility to cast and convert the data to the correct C type. The pointer returned by GetValue points to storage that is part of the PGresult structure. One should not modify it, and one must explicitly copy the value into other storage if it is to be used past the lifetime of the PGresult structure itself. BinaryTuples() is not yet implemented.

GetValue
    Returns a single field (attribute) value of one tuple of a PGresult. Tuple and field indices start at 0.

    const char *PgDatabase::GetValue(int tup_num, const char *field_name)

For most queries, the value returned by GetValue is a null-terminated ASCII string representation of the attribute value. But if BinaryTuples() is TRUE, the value returned by GetValue is the binary representation of the type in the internal format of the backend server (but not including the size word, if the field is variable-length). It is then the programmer's responsibility to cast and convert the data to the correct C type. The pointer returned by GetValue
UPDATE, with the rule qualification expression

    int4ne(*NEW*.sl_avail, *OLD*.sl_avail)

and one action

    INSERT INTO shoelace_log SELECT
        *NEW*.sl_name, *NEW*.sl_avail,
        getpgusername(), datetime('now'::text)
    FROM shoelace_data *NEW*, shoelace_data *OLD*,
         shoelace_log shoelace_log;

Don't trust the output of the pg_rules system view. It specially handles the situation that there are only references to NEW and OLD in the INSERT and outputs the VALUES format of INSERT. In fact there is no difference between an INSERT ... VALUES and an INSERT ... SELECT on parsetree level. They both have rangetables, targetlists and maybe qualifications etc. The optimizer later decides whether to create an execution plan of type result, seqscan, indexscan, join or whatever for that parsetree. If there are no references to rangetable entries left in the parsetree, it becomes a result execution plan (the INSERT ... VALUES version). The rule action above can truly result in both variants.

The rule is a qualified non-INSTEAD rule, so the rule system has to return two parsetrees: the modified rule action and the original parsetree. In the first step, the rangetable of the original query is incorporated into the rule's action parsetree. This results in

    INSERT INTO shoelace_log SELECT
        *NEW*.sl_name, *NEW*.sl_avail,
        getpgusername(), datetime('now'::text)
    FROM shoelace_data shoelace_data, shoelace_data *NEW*,
         shoelace_data *OLD*, shoelace_l
Visual Basic and the other RAD tools have Recordset objects that use ODBC directly to access data. Using the data-aware controls, you can quickly link to the ODBC back-end database. Playing around with MS Access will help you sort this out. Try using File > Get External Data.

Tip: You'll have to set up a DSN first.

Unix Installation

ApplixWare has an ODBC database interface supported on at least some platforms. ApplixWare v4.4.1 has been demonstrated under Linux with Postgres v6.4 using the psqlODBC driver contained in the Postgres distribution.

Building the Driver

The first thing to note about the psqlODBC driver (or any ODBC driver) is that there must exist a driver manager on the system where the ODBC driver is to be used. There exists a freeware ODBC driver for Unix called iodbc which can be obtained from various locations on the Net, including at AS200 (http://www.as220.org/FreeODBC/iodbc-2.12.shar.Z). Instructions for installing iodbc are beyond the scope of this document, but there is a README that can be found inside the iodbc compressed .shar file that should explain how to get it up and running.

Having said that, any driver manager that you can find for your platform should support the psqlODBC driver, or any ODBC driver. The Unix configuration files for psqlODBC have recently been extensively reworked to allow for easy building on supported platforms as well as to allow for support of other Unix platforms in the future.
a fifth gets reduced into two queries.

There is a little detail that's a bit ugly. Looking at the two queries, it turns out that the shoelace_data relation appears twice in the rangetable, where it could definitely be reduced to one. The optimizer does not handle it, and so the execution plan for the rule system's output of the INSERT will be

    Nested Loop
      ->  Merge Join
            ->  Seq Scan
                  ->  Sort
                        ->  Seq Scan on s
            ->  Seq Scan
                  ->  Sort
                        ->  Seq Scan on shoelace_arrive
      ->  Seq Scan on shoelace_data

while omitting the extra rangetable entry would result in a

    Merge Join
      ->  Seq Scan
            ->  Sort
                  ->  Seq Scan on s
      ->  Seq Scan
            ->  Sort
                  ->  Seq Scan on shoelace_arrive

that produces exactly the same entries in the log relation. Thus, the rule system caused one extra scan on the shoelace_data relation that is absolutely not necessary. And the same obsolete scan is done once more in the UPDATE. But it was a really hard job to make that all possible at all.

A final demonstration of the Postgres rule system and its power. There is a cute blonde that sells shoelaces. And what Al could never realize, she's not only cute, she's smart too, a little too smart. Thus, it happens from time to time that Al orders shoelaces that are absolutely not sellable. This time he ordered 1000 pairs of magenta shoelaces, and since another kind is
and performs the transformations given in the rule bodies. One application of the rewrite system is given in the realization of views. Whenever a query against a view (i.e., a virtual table) is made, the rewrite system rewrites the user's query to a query that accesses the base tables given in the view definition instead.

The planner/optimizer takes the rewritten querytree and creates a queryplan that will be the input to the executor. It does so by first creating all possible paths leading to the same result. For example, if there is an index on a relation to be scanned, there are two paths for the scan: one possibility is a simple sequential scan and the other possibility is to use the index. Next, the cost for the execution of each plan is estimated, and the cheapest plan is chosen and handed back.

The executor recursively steps through the plan tree and retrieves tuples in the way represented by the plan. The executor makes use of the storage system while scanning relations, performs sorts and joins, evaluates qualifications and finally hands back the tuples derived.

Chapter 22. Overview of PostgreSQL Internals

In the following sections we will cover each of the above listed items in more detail to give a better understanding of Postgres's internal control and data structures.

How Connections are Established

Postgres is implemented using a simple "process per-user" client/server model. In this model there is one client proc
based on Postgres release 4.2 (http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/postgres.html). The Postgres project, led by Professor Michael Stonebraker, has been sponsored by the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), the National Science Foundation (NSF), and ESL, Inc.

The first part of this manual explains the Postgres approach to extensibility and describes how users can extend Postgres by adding user-defined types, operators, aggregates, and both query language and programming language functions. After a discussion of the Postgres rule system, we discuss the trigger and SPI interfaces. The manual concludes with a detailed description of the programming interfaces and support libraries for various languages. We assume proficiency with UNIX and C programming.

Resources

This manual set is organized into several parts:

Tutorial
    An introduction for new users. Does not cover advanced features.
User's Guide
    General information for users, including available commands and data types.
Programmer's Guide
    Advanced information for application programmers. Topics include type and function extensibility, library interfaces, and application design issues.
Administrator's Guide
    Installation and management information. List of supported machines.
Developer's Guide
    Information for Postgres developers. This is intended for those who are contributing to the Postgres project; application developme
Connecting to the Database ..... 186
Issuing a Query and Processing the Result ..... 186
    Using the Statement Interface ..... 186
    Using the ResultSet Interface ..... 187
Performing Updates ..... 187
Closing the Connection ..... 187
Using Large Objects ..... 187
Postgres Extensions to the JDBC API ..... 188
Further Reading ..... 189
22. Overview of PostgreSQL Internals ..... 190
    The Path of a Query ..... 190
    How Connections are Established ..... 191
    The Parser Stage ..... 191
        Parser ..... 191
        Transformation Process ..... 193
    The Postgres Rule System ..... 193
        The Rewrite System ..... 193
        Techniques To Implement Views ..... 194
    Planner/Optimizer ..... 195
        Generating Possible Plans ..... 195
        Data Structure of the Plan ..... 195
    Executor ..... 196
23. pg_options .....
cast by the PL/pgSQL bytecode interpreter using the result type's output- and the variable's type input-functions. Note that this could potentially result in runtime errors generated by the types' input functions.

An assignment of a complete selection into a record or row can be done by

    SELECT expressions INTO target FROM ...;

target can be a record, a row variable, or a comma-separated list of variables and record-/row-fields. If a row or a variable list is used as target, the selected values must exactly match the structure of the target(s), or a runtime error occurs. The FROM keyword can be followed by any valid qualification, grouping, sorting etc. that can be given for a SELECT statement.

There is a special variable named FOUND of type bool that can be used immediately after a SELECT INTO to check if an assignment had success.

    SELECT * INTO myrec FROM EMP WHERE empname = myname;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''employee % not found'', myname;
    END IF;

If the selection returns multiple rows, only the first is moved into the target fields. All others are silently discarded.

Calling another function

All functions defined in a Postgres database return a value. Thus, the normal way to call a function is to execute a SELECT query or doing an assignment (resulting in a PL/pgSQL internal SELECT). But there are cases where someone isn't interested in the function's result.

    PERFORM query

executes a SELECT query over the SP
docbook-dsssl version 1.41 was used to produce these documents. Lennart Staflin's PSGML (ftp://ftp.lysator.liu.se/pub/sgml) version 1.0.1 (in psgml-1.0.1.tar.gz) was available at the time of writing.

Important URLs:

    The Jade web page (http://www.jclark.com/jade)
    The DocBook web page (http://www.ora.com/davenport)
    The Modular Stylesheets web page (http://nwalsh.com/docbook/dsssl)
    The PSGML web page (http://www.lysator.liu.se/projects/about_psgml.html)
    Steve Pepper's Whirlwind Guide (http://www.infotek.no/sgmltool/guide.htm)
    Robin Cover's database of SGML software (http://www.sil.org/sgml/publicSW.html)

Installing Jade

1. Read the installation instructions at the above listed URL.

2. Unzip the distribution kit in a suitable place. The command to do this will be something like

       unzip -aU jade1_1.zip

Appendix DG2. Documentation

Jade is not built using GNU Autoconf, so you'll need to edit a Makefile yourself. Since James Clark has been good enough to prepare his kit for it, it is a good idea to make a build directory (named for your machine architecture, perhaps) under the main directory of the Jade distribution, copy the file Makefile from the main directory into it, edit it there, and then run make there. However, the Makefile does need to be edited. There is a file called Makefile.jade in the main directory which is intended to be used with make -f Makefile.jade when building Jade (as oppose
..... 113
16. libpq ..... 117
    Database Connection Functions ..... 117
    Query Execution Functions ..... 120
    Asynchronous Query Processing ..... 124
    Fast Path ..... 126
    Asynchronous Notification ..... 126
    Functions Associated with the COPY Command ..... 127
    libpq Tracing Functions ..... 129
    libpq Control Functions ..... 129
    User Authentication Functions ..... 129
    Environment Variables ..... 130
    Caveats ..... 131
    Sample Programs ..... 131
        Sample Program 1 ..... 131
        Sample Program 2 ..... 133
        Sample Program 3 ..... 134
17. libpq C++ Binding ..... 138
    Control and Initialization ..... 138
    Environment Variables ..... 138
    libpq++ Classes ..... 139
        Connection Class: PgConnection ..... 139
        Database Class: PgDatabase ..... 139
'em away, but that's another problem. The pink entry we delete. To make it a little harder for Postgres, we don't delete it directly. Instead we create one more view.

A DELETE on a view, with a subselect qualification that in total uses 4 nesting/joined views, where one of them itself has a subselect qualification containing a view and where calculated view columns are used, gets rewritten into one single parsetree that deletes the requested data from a real table. I think there are only a few situations out in the real world where such a construct is necessary. But it makes me feel comfortable that it works.

The truth is: Doing this I found one more bug while writing this document. But after fixing that I was a little amazed that it works at all.

Rules and Permissions

Due to rewriting of queries by the Postgres rule system, other tables/views than those used in the original query get accessed. Using update rules, this can include write access to tables. Rewrite rules don't have a separate owner. The owner of a relation (table or view) is automatically the owner of the rewrite rules that are defined for it. The Postgres rule system changes the behaviour of the default access control system. Relations that are used due to rules get checked during the rewrite against the permissions of the relation owner the rule is defined on. This means that a user does only n
pass by reference, fixed-length; pass by reference, variable-length.

By-value types can only be 1, 2 or 4 bytes in length (even if your computer supports by-value types of other sizes). Postgres itself only passes integer types by value. You should be careful to define your types such that they will be the same size (in bytes) on all architectures. For example, the long type is dangerous because it is 4 bytes on some machines and 8 bytes on others, whereas the int type is 4 bytes on most UNIX machines (though not on most personal computers). A reasonable implementation of the int4 type on UNIX machines might be:

    /* 4-byte integer, passed by value */
    typedef int int4;

On the other hand, fixed-length types of any size may be passed by reference. For example, here is a sample implementation of a Postgres type:

    /* 16-byte structure, passed by reference */
    typedef struct
    {
        double x, y;
    } Point;

Only pointers to such types can be used when passing them in and out of Postgres functions. Finally, all variable-length types must also be passed by reference. All variable-length types must begin with a length field of exactly 4 bytes, and all data to be stored within that type must be located in the memory immediately following that length field. The length field is the total length of the structure (i.e., it includes the size of the length field itself). We can define the text type as follows: typedef
49. fragmentation of the page Page size is stored in each page because frames in the buffer pool may be subdivided into equal sized pages on a frame by frame basis within a class The internal fragmentation information is used to aid in determining when page reorganization should occur 223 Files Bugs Chapter 29 Page Files Following the page header are item identifiers ItemIdData New item identifiers are allocated from the first four bytes of unallocated space Because an item identifier is never moved until it is freed its index may be used to indicate the location of an item on a page In fact every pointer to an item ItemPointer created by Postgres consists of a frame number and an index of an item identifier An item identifier contains a byte offset to the start of an item its length in bytes and a set of attribute bits which affect its interpretation The items themselves are stored in space allocated backwards from the end of unallocated space Usually the items are not interpreted However when the item is too long to be placed on a single page or when fragmentation of the item is desired the item is divided and each piece is handled as distinct items in the following manner The first through the next to last piece are placed in an item continuation structure ItemContinuationData This structure contains itemPointerData which points to the next piece and the piece itself The last piece is handled normally data L
50. gets a unique operator id According to the types of the attributes used within the qualifications etc the appropriate operator ids have to be used Executor The executor takes the plan handed back by the planner optimizer and starts processing the top node In the case of our example the query given in example ref simple_select the top node is a MergeJoin node Before any merge can be done two tuples have to be fetched one from each subplan So the executor recursively calls itself to process the subplans it starts with the subplan attached to lefttree The new top node the top node of the left subplan is a SeqScan node and again a tuple has to be fetched before the node itself can be processed The executor calls itself recursively another time for the subplan attached to lefttree of the SeqScan node Now the new top node is a Sort node As a sort has to be done on the whole relation the executor starts fetching tuples from the Sort node s subplan and sorts them into a temporary relation in memory or a file when the Sort node is visited for the first time Further examinations of the Sort node will always return just one tuple from the sorted temporary relation Every time the processing of the Sort node needs a new tuple the executor is recursively called for the SeqScan node attached as subplan The relation internally referenced by the value given in the scanrelid field is scanned for the next tuple If the tuple satisfi
51. in the from clause of the SQL query a RangeVar node is created holding the name of the alias and a pointer to a RelExpr node holding the name of the relation All Range Var nodes are collected in a list which is attached to the field fromClause of the SelectStmt node For every entry appearing in the select list of the SQL query a ResTarget node is created holding a pointer to an Attr node The Attr node holds the relation name of the entry and a pointer to a Value node holding the name of the attribute All ResTarget nodes are collected to a list which is connected to the field targetList of the SelectStmt node Figure wef where clause shows the operator tree built for the where clause of the SQL query given in example A Simple SelectThis example contains the following simple query that will be used in various descriptions and figures throughout the following sections The query assumes that the tables given in The Supplier Database have already been defined select s sname se pno from supplier s sells se where s sno gt 2 and s sno se sno which is attached to the field qual of the SelectStmt node The top node of the operator tree is an A_Expr node representing an AND operation This node has two successors called lexpr and rexpr pointing to two subtrees The subtree attached to lexpr represents the qualification s sno gt 2 and the one attached to rexpr represents s sno se sno For every attribute an Attr node is created holding the name of
52. inside the database backend it should only be used for languages that don t gain access to database backends internals or the filesystem The languages PL pgSQL and PL Tcl are known to be trusted Example 1 The following command tells the database where to find the shared object for the PL pgSQL languages call handler function CREATE FUNCTION plpgsql1 call handler RETURNS OPAQUE AS usr local pgsql lib plpgsql so LANGUAGE C 2 The command CREATE TRUSTED PROCEDURAL LANGUAGE plpgsql HANDLER plpgsql_call_ handler LANCOMPILER PL pgSQL 61 Chapter 11 Procedural Languages then defines that the previously declared call handler function should be invoked for functions and trigger procedures where the language attribute is plpgsql PL handler functions have a special call interface that is different from regular C language functions One of the arguments given to the handler is the object ID in the pg_proc tables entry for the function that should be executed The handler examines various system catalogs to analyze the functions call arguments and it s return data type The source text of the functions body is found in the prosrc attribute of pg proc Due to this in contrast to C language functions PL functions can be overloaded like SQL language functions There can be multiple different PL functions having the same function name as long as the call arguments differ Procedural languages defined in the
53.
    res = PQexec(conn, "DECLARE mycursor CURSOR FOR select * from pg_database");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "DECLARE CURSOR command failed\n");
        PQclear(res);
        exit_nicely(conn);
    }
    PQclear(res);

    res = PQexec(conn, "FETCH ALL in mycursor");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "FETCH ALL command didn't return tuples properly\n");
        PQclear(res);
        exit_nicely(conn);
    }

    /* first, print out the attribute names */
    nFields = PQnfields(res);
    for (i = 0; i < nFields; i++)
        printf("%-15s", PQfname(res, i));
    printf("\n\n");

    /* next, print out the instances */
    for (i = 0; i < PQntuples(res); i++)
    {
        for (j = 0; j < nFields; j++)
            printf("%-15s", PQgetvalue(res, i, j));
        printf("\n");
    }
    PQclear(res);

    /* close the cursor */
    res = PQexec(conn, "CLOSE mycursor");
    PQclear(res);

    /* commit the transaction */
    res = PQexec(conn, "COMMIT");
    PQclear(res);

    /* close the connection to the database and cleanup */
    PQfinish(conn);

    fclose(debug);

Sample Program 2

/*
 * testlibpq2.c
 *   Test of the asynchronous notification interface
 *
 * Start this program, then from psql in another window do
 *   NOTIFY TBL2;
 *
 * Or, if you want to get fancy, try this:
 * Populate a database with the following:
 *
 *   CREATE TABLE TBL1 (i int4);
 *   CREATE TABLE TBL2 (i int4);
 *   CREATE RULE r1 AS ON INSERT TO
54. lib/postgresql.jar:.  java uk.org.retep.finder.Main

Loading the driver is covered later on in this chapter.

Preparing the Database for JDBC

Because Java can only use TCP/IP connections, the Postgres postmaster must be running with the -i flag.

184 Chapter 21. JDBC Interface

Also, the pg_hba.conf file must be configured. It's located in the PGDATA directory. In a default installation, this file permits access only by UNIX domain sockets. For the JDBC driver to connect to the same localhost, you need to add something like:

    host  all  127.0.0.1  255.255.255.255  password

Here access to all databases is possible from the local machine with JDBC. The JDBC driver supports the trust, ident, password and crypt authentication methods.

Using the Driver

This section is not intended as a complete guide to JDBC programming, but should help to get you started. For more information refer to the standard JDBC API documentation. Also, take a look at the examples included with the source. The basic example is used here.

Importing JDBC

Any source that uses JDBC needs to import the java.sql package, using:

    import java.sql.*;

Important: Do not import the postgresql package. If you do, your source will not compile, as javac will get confused.

Loading the Driver

Before you can connect to a database, you need to load the driver. There are two methods available, and which one is best depends on your code. In the first method, your code implicit
55. line correctly and does not for example mistake the end of a long data line for a terminator line The code in src bin psql psql c contains routines that correctly handle the copy protocol PQgetlineAsync Reads a newline terminated line of characters transmitted by the backend server into a buffer without blocking int PQgetlineAsync PGconn conn char buffer int bufsize 127 Chapter 16 libpq This routine is similar to PQgetline but it can be used by applications that must read COPY data asynchronously that is without blocking Having issued the COPY command and gotten a PGRES COPY OUT response the application should call POconsumelnput and PQgetlineAsync until the end of data signal is detected Unlike PQgetline this routine takes responsibility for detecting end of data On each call POgetlineAsync will return data if a complete newline terminated data line is available in libpq s input buffer or if the incoming data line is too long to fit in the buffer offered by the caller Otherwise no data is returned until the rest of the line arrives The routine returns 1 if the end of copy data marker has been recognized or 0 if no data is available or a positive number giving the number of bytes of data returned If 1 is returned the caller must next call PQendcopy and then return to normal processing The data returned will not extend beyond a newline character If possible a whole line will be returned at one time But
56. next E g the query tree 2 do 4 IN 1 is encoded by the integer string 4 1 3 2 which means first join relation 4 and 1 then 3 and then 2 where 1 2 3 4 are relids in Postgres Parts of the GEQO module are adapted from D Whitley s Genitor algorithm Specific characteristics of the GEQO implementation in Postgres are Usage of a steady state GA replacement of the least fit individuals in a population not whole generational replacement allows fast convergence towards improved query plans This is essential for query handling with reasonable time Usage of edge recombination crossover which is especially suited to keep edge losses low for the solution of the TSP by means of a GA Mutation as genetic operator is deprecated so that no repair mechanisms are needed to generate legal TSP tours The GEQO module gives the following benefits to the Postgres DBMS compared to the Postgres query optimizer implementation Handling of large join queries through non exhaustive search Improved cost size approximation of query plans since no longer plan merging is needed the GEQO module evaluates the cost for a query plan as an individual 201 Chapter 24 Genetic Query Optimization in Database Systems Future Implementation Tasks for Postgres GEQO Basic Improvements Improve freeing of memory when query is already processed With large join queries the computing time spent for the genetic que
57. no visible result at all Note that if the current query is part of a transaction cancellation will abort the whole transaction PQrequestCancel can safely be invoked from a signal handler So it is also possible to use it in conjunction with plain PQexec if the decision to cancel can be made in a signal handler For example psql invokes PQrequestCancel from a SIGINT signal handler thus allowing interactive cancellation of queries that it issues through PQexec Note that PQrequestCancel will have no effect if the connection is not currently open or the backend is not currently processing a query 125 Chapter 16 libpq Fast Path Postgres provides a fast path interface to send function calls to the backend This is a trapdoor into system internals and can be a potential security hole Most users will not need this feature PQfn Request execution of a backend function via the fast path interface PGresult PQfn PGconn conn int fnid int result_buf int result_len int result_is int PQArgBlock args int nargs The fnid argument is the object identifier of the function to be executed result buf is the buffer in which to place the return value The caller must have allocated sufficient space to store the return value there is no check The actual result length will be returned in the integer pointed to by result len If a 4 byte integer result is expected set result is int to 1 otherwise set it to 0 Setting result is i
58. of the other source files used for the documentation. The primary source files are:

postgres.sgml — This is the "integrated" document, including all other documents as parts. Output is generated in HTML since the browser interface makes it easy to move around all of the documentation by just clicking. The other documents are available in both HTML and hardcopy.

tutorial.sgml — The introductory tutorial, with examples. Does not include programming topics, and is intended to help a reader unfamiliar with SQL. This is the "getting started" document.

user.sgml — The User's Guide. Includes information on data types and user-level interfaces. This is the place to put information on "why".

reference.sgml — The Reference Manual. Includes Postgres SQL syntax. This is the place to put information on "how".

programming.sgml — The Programmer's Guide. Includes information on Postgres extensibility and on the programming interfaces.

admin.sgml — The Administrator's Guide. Includes installation and release notes.

235 Appendix DG2. Documentation

Styles and Conventions

DocBook has a rich set of tags and constructs, and a surprisingly large percentage are directly and obviously useful for well-formed documentation. The Postgres documentation set has only recently been adapted to SGML, and in the near future several sections of the set will be selected and maintained as prototypical examples of DocBook usage. Also, a short summary of DocBook tags will be in
59. plans that are really used during the entire lifetime of the database connection. Except for input/output conversion and calculation functions for user-defined types, anything that can be defined in C language functions can also be done with PL/pgSQL. It is possible to create complex conditional computation functions and later use them to define operators or use them in functional indices.

Structure of PL/pgSQL

The PL/pgSQL language is case insensitive. All keywords and identifiers can be used in mixed upper- and lowercase. PL/pgSQL is a block-oriented language. A block is defined as:

    [<<label>>]
    [DECLARE
        declarations]
    BEGIN
        statements
    END;

There can be any number of subblocks in the statement section of a block. Subblocks can be used to hide variables from outside a block of statements. The variables declared in the declarations section preceding a block are initialized to their default values every time the block is entered, not only once per function call.

It is important not to misunderstand the meaning of BEGIN/END for grouping statements in PL/pgSQL and the database commands for transaction control. Functions and trigger procedures cannot start or commit transactions, and Postgres does not have nested transactions.

Comments

There are two types of comments in PL/pgSQL. A double dash starts a comment that extends to the end of the line. A start
60. query happen to match a key of an index further plans will be considered After all feasible plans have been found for scanning single relations plans for joining relations are created The planner optimizer considers only joins between every two relations for which there exists a corresponding join clause i e for which a restriction like where rell attrl rel2 attr2 exists in the where qualification All possible plans are generated for every join pair considered by the planner optimizer The three possible join strategies are nested iteration join The right relation is scanned once for every tuple found in the left relation This strategy is easy to implement but can be very time consuming merge sort join Each relation is sorted on the join attributes before the join starts Then the two relations are merged together taking into account that both relations are ordered on the join attributes This kind of join is more attractive because every relation has to be scanned only once hash join the right relation is first hashed on its join attributes Next the left relation is scanned and the appropriate values of every tuple found are used as hash keys to locate the tuples in the right relation Data Structure of the Plan Here we will give a little description of the nodes appearing in the plan Figure ref plan shows the plan produced for the query in example ref simple_select The top node of the plan is a MergeJoin node which has
61. sql

User-Defined Types

Functions Needed for a User-Defined Type

A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-delimited character string as its input and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type and returns a null-delimited character string. Suppose we want to define a complex type which represents complex numbers. Naturally, we choose to represent a complex in memory as the following C structure:

    typedef struct Complex {
        double x;
        double y;
    } Complex;

and a string of the form (x,y) as the external string representation. These functions are usually not hard to write, especially the output function. However, there are a number of points to remember. When defining your external (string) representation, remember that you must eventually write a complete and robust parser for that representation as your input function:

    Complex *
    complex_in(char *str)
    {
        double x, y;
        Complex *result;

        if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2) {
            elog(WARN, "complex_in: error in parsing");
            return NULL;
        }
        result = (Complex *) palloc(sizeof(Complex));
        result->x = x;
        result->y = y;
        return result;
    }

The output function can simply be:

    char *
    complex_out(
62. such an object, one first needs the appropriate environment for the backend to access. The following constructors deal with making a connection to a backend server from a C++ program.

139 Chapter 17. libpq C++ Binding

Database Connection Functions

PgConnection makes a new connection to a backend database server.

    PgConnection(const char *conninfo)

Although typically called from one of the access classes, a connection to a backend server is possible by creating a PgConnection object.

ConnectionBad returns whether or not the connection to the backend server succeeded or failed.

    int PgConnection::ConnectionBad()

Returns TRUE if the connection failed.

Status returns the status of the connection to the backend server.

    ConnStatusType PgConnection::Status()

Returns either CONNECTION_OK or CONNECTION_BAD depending on the state of the connection.

PgDatabase makes a new connection to a backend database server.

    PgDatabase(const char *conninfo)

After a PgDatabase has been created, it should be checked to make sure the connection to the database succeeded before sending queries to the object. This can easily be done by retrieving the current status of the PgDatabase object with the Status or ConnectionBad methods.

DBName returns the name of the current database.

    const char *PgConnection::DBName()

Notifies returns the next notification from a list of unhandled notification messages received from the backend.

    PGnotify*
63. template database are automatically defined in all subsequently created databases So the database administrator can decide which languages are available by default PL pgSQL PL pgSQL is a loadable procedural language for the Postgres database system This package was originally written by Jan Wieck Overview The design goals of PL pgSQL were to create a loadable procedural language that can be used to create functions and trigger procedures adds control structures to the SQL language can perform complex computations inherits all user defined types functions and operators can be defined to be trusted by the server is easy to use The PL pgSQL call handler parses the functions source text and produces an internal binary instruction tree on the first time the function is called by a backend The produced bytecode is identified in the call handler by the object ID of the function This ensures that changing a function by a DROP CREATE sequence will take effect without establishing a new database connection For all expressions and SQL statements used in the function the PL pgSQL bytecode interpreter creates a prepared execution plan using the SPI managers SPI_prepare and SPI_saveplan functions This is done the first time the individual statement is processed in the PL pgSQL function Thus a function with conditional code that contains many statements for which execution plans would be required will only prepare and save those
64. that don't use tables, and as both JadeTeX and the style sheets are under continuous improvement, it will certainly get better over time.

To install and use JadeTeX, you will need a working installation of TeX and LaTeX2e, including the supported tools and graphics packages, Babel, AMS fonts and AMS-LaTeX, the PSNFSS extension and companion kit of "the 35 fonts", the dvips program for generating PostScript, the macro packages fancyhdr, hyperref, minitoc, url and ot2enc, and of course JadeTeX itself. All of these can be found on your friendly neighborhood CTAN site.

JadeTeX does not, at the time of writing, come with much of an installation guide, but there is a makefile which shows what is needed. It also includes a directory cooked/, wherein you'll find some of the macro packages it needs, but not all, and not complete (at least as of the last time we looked).

Before building the jadetex.fmt format file, you'll probably want to edit the jadetex.ltx file to change the configuration of Babel to suit your locality. The line to change looks something like:

    \RequirePackage[german,french,english]{babel}[1997/01/23]

and you should obviously list only the languages you actually need, and have configured Babel for.

With JadeTeX working, you should be able to generate and format TeX output for the PostgreSQL manuals by giving the commands (as above, in the doc/src/sgml directory):

    jade -t tex -d /usr/local/share/docbook/print/docbook.dsl -D ../graphics postgres.s
65. the SQL statement, the SQL statement is performed against the database and you can continue with the result.

How To Use ecpg

This section describes how to use the ecpg tool.

Preprocessor

The preprocessor is called ecpg. After installation it resides in the Postgres bin/ directory.

165 Chapter 19. ecpg - Embedded SQL in C

Library

The ecpg library is called libecpg.a or libecpg.so. Additionally, the library uses the libpq library for communication to the Postgres server, so you will have to link your program with -lecpg -lpq.

The library has some methods that are "hidden" but that could prove very useful sometime.

ECPGdebug(int on, FILE *stream) turns on debug logging if called with the first argument non-zero. Debug logging is done on stream. Most SQL statements log their arguments and result. The most important one, ECPGdo, which is called on almost all SQL statements, logs both its expanded string (i.e. the string with all the input variables inserted) and the result from the Postgres server. This can be very useful when searching for errors in your SQL statements.

ECPGstatus() returns TRUE if we are connected to a database and FALSE if not.

Error handling

To be able to detect errors from the Postgres server, you include a line like

    exec sql include sqlca;

in the include section of your file. This will define a struct and a variable with the name sqlca as following:

    struct sqlca
    {
        char sqlcaid[8];
        long sqlabc
66. the basics of view rules The SELECT FROM shoelace was interpreted by the parser and produced the parsetree SELECT shoelace sl name shoelace sl avail shoelace sl color shoelace sl len 32 Chapter 8 The Postgres Rule System shoelace sl unit shoelace sl len cm FROM shoelace shoelace and this is given to the rule system The rule system walks through the rangetable and checks if there are rules in pg rewrite for any relation When processing the rangetable entry for shoelace the only one up to now it finds the rule RETshoelace with the parsetree SELECT s sl name s sl avail s sl color s sl len s sl unit float8mul s sl len u un fact AS sl len cm FROM shoelace OLD shoelace NEW shoelace data s unit u WHERE bpchareq s sl unit u un name Note that the parser changed the calculation and qualification into calls to the appropriate functions But in fact this changes nothing The first step in rewriting is merging the two rangetables The resulting parsetree then reads SELECT shoelace sl name shoelace sl avail shoelace sl color shoelace sl len shoelace sl unit shoelace sl len cm FROM shoelace shoelace shoelace OLD shoelace NEW shoelace data s unit u In step 2 it adds the qualification from the rule action to the parsetree resulting in SELECT shoelace sl name shoelace sl avail shoelace sl color shoelace sl len shoelace sl unit shoelace sl len cm FROM shoelace shoelace shoelace OL
67. the fields of the PGconn structure because they are subject to change in the future Beginning in Postgres release 6 4 the definition of struct PGconn is not even provided in libpq fe h If you have old code that accesses PGconn fields directly you can keep using it by including libpq int h too but you are encouraged to fix the code soon PQdb Returns the database name of the connection char POdb PGconn conn PQdb and the next several functions return the values established at connection These values are fixed for the life of the PGconn object PQuser Returns the user name of the connection char PQuser PGconn conn PQpass Returns the password of the connection char PQpass PGconn conn PQhost Returns the server host name of the connection char PQhost PGconn conn PQport Returns the port of the connection char PQport PGconn conn PQtty Returns the debug tty of the connection char PQtty PGconn conn PQoptions Returns the backend options used in the connection char PQoptions PGconn conn PQstatus Returns the status of the connection The status can be CONNECTION OK or CONNECTION BAD ConnStatusType PQstatus PGconn conn A failed connection attempt is signaled by status CONNECTION_BAD Ordinarily an OK status will remain so until PQfinish but a communications failure might result in the status changing to CONNECTION_BAD prematurely In that case the application could try to recover by calling PQreset 119
68. the host variable is of a bool type and the field in the Postgres database is neither 't' nor 'f'.

-208: Empty query line %d. Postgres returned PGRES_EMPTY_QUERY, probably because the query indeed was empty.

-220: No such connection %s in line %d. The program tries to access a connection that does not exist.

-221: Not connected in line %d. The program tries to access a connection that does exist but is not open.

-230: Invalid statement name %s in line %d. The statement you are trying to use has not been prepared.

-400: Postgres error: %s line %d. Some Postgres error. The message contains the error message from the Postgres backend.

-401: Error in transaction processing line %d. Postgres signalled to us that we cannot start, commit or rollback the transaction.

-402: connect: could not open database %s. The connect to the database did not work.

100: Data not found line %d. This is a "normal" error that tells you that what you are querying cannot be found, or we have gone through the cursor.

Limitations

What will never be included, and why (or what cannot be done with this concept): Oracle's single-tasking possibility. Oracle version 7.0 on AIX 3 uses the OS-supported locks on the shared memory segments and allows the application designer to link an application in a so-called single-tasking way. Instead of starting one client process per application process, both the database
69. the optimizer by giving it some idea of how many rows will be eliminated by WHERE clauses that have this form What happens if the constant is on the left you may be wondering Well that s one of the things that COMMUTATOR is for Writing new restriction selectivity estimation functions is far beyond the scope of this chapter but fortunately you can usually just use one of the system s standard estimators for many of your own operators These are the standard restriction estimators eqsel tof neqsel for lt gt intltsel for lt or lt intgtsel for gt or gt It might seem a little odd that these are the categories but they make sense if you think about it will typically accept only a small fraction of the rows in a table lt gt will typically reject only a small fraction lt will accept a fraction that depends on where the given constant falls in the range of values for that table column which it just so happens is information collected by VACUUM ANALYZE and made available to the selectivity estimator will accept a slightly larger fraction than lt for the same comparison constant but they re close enough to 22 JOIN Chapter 6 Extending SOL Operators not be worth distinguishing especially since we re not likely to do better than a rough guess anyhow Similar remarks apply to gt and gt You can frequently get away with using either eqsel or neqsel for operators
70. the relation and a pointer to a Value node holding the name of the attribute For the constant term appearing in the query a Const node is created holding the value 192 Chapter 22 Overview of PostgreSQL Internals Transformation Process The transformation process takes the tree handed back by the parser as input and steps recursively through it If a SelectStmt node is found it is transformed to a Query node which will be the top most node of the new data structure Figure ref transformed shows the transformed data structure the part for the transformed where clause is given in figure ref transformed_where because there was not enough space to show all parts in one figure Now a check is made if the relation names in the FROM clause are known to the system For every relation name that is present in the system catalogs a RTE node is created containing the relation name the alias name and the relation id From now on the relation ids are used to refer to the relations given in the query All RTE nodes are collected in the range table entry list which is connected to the field rtable of the Query node If a name of a relation that is not known to the system is detected in the query an error will be returned and the query processing will be aborted Next it is checked if the attribute names used are contained in the relations given in the query For every attribute that is found a TLE node is created holding a pointer to a Resdom node
71. the view relation shoe the rule system will apply the rules Since the rules have no actions and are INSTEAD the resulting list of parsetrees will be empty and the whole query will become nothing because there is nothing left to be optimized or executed after the rule system is done with it Note This fact might irritate frontend applications because absolutely nothing happened on the database and thus the backend will not return anything for the query Not even a PGRES_EMPTY_QUERY or so will be available in libpq In psql nothing happens This might change in the future A more sophisticated way to use the rule system is to create rules that rewrite the parsetree into one that does the right operation on the real tables To do that on the shoelace view we create the following rules CREATE RULE shoelace_ins AS ON INSERT TO shoelace DO INSTEAD INSERT INTO shoelace data VALUES NEW sl name NEW sl avail NEW sl color NEW sl len NEW sl unit CREATE RULE shoelace upd AS ON UPDATE TO shoelace DO INSTEAD UPDATE shoelace data SET Sl name NEW sl name Sl avail NEW sl avail sl color NEW sl color sl len NEW sl len sl unit NEW sl unit WHERE sl name OLD sl name CREATE RULE shoelace del AS ON DELETE TO shoelace DO INSTEAD DELETE FROM shoelace data WHERE sl name OLD sl name Now there is a pack of shoelaces arriving in Als shop and it has a big partlist Al is not that good in calculating and so we don t want h
value from a trigger procedure is one of the strings OK or SKIP, or a list as returned by the array get Tcl command. If the return value is OK, the normal operation (INSERT/UPDATE/DELETE) that fired this trigger will take place. Obviously, SKIP tells the trigger manager to silently suppress the operation. The list from array get tells PL/Tcl to return a modified row to the trigger manager that will be inserted instead of the one given in NEW (INSERT/UPDATE only). Needless to say that all this is only meaningful when the trigger is BEFORE and FOR EACH ROW.

Here's a little example trigger procedure that forces an integer value in a table to keep track of the # of updates that are performed on the row. For new rows inserted, the value is initialized to 0 and then incremented on every update operation:

    CREATE FUNCTION trigfunc_modcount() RETURNS OPAQUE AS '
        switch $TG_op {
            INSERT {
                set NEW($1) 0
            }
            UPDATE {
                set NEW($1) $OLD($1)
                incr NEW($1)
            }
            default {
                return OK
            }
        }
        return [array get NEW]
    ' LANGUAGE 'pltcl';

    CREATE TABLE mytab (num int4, modcnt int4, desc text);

    CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab
        FOR EACH ROW EXECUTE PROCEDURE trigfunc_modcount('modcnt');

Chapter 11: Procedural Languages

Database Access from PL/Tcl

The following commands are available to access the database from the body of a PL/Tcl procedure:

elog level msg

Fire a log message. Possible levels are NOTICE, WARN, ERROR, FATAL
value must be written as \'. Spaces around the equal sign are optional. The currently recognized parameter keywords are:

host - Host to connect to. If a non-zero-length string is specified, TCP/IP communication is used. Without a host name, libpq will connect using a local Unix domain socket.

port - Port number to connect to at the server host, or socket filename extension for Unix-domain connections.

dbname - Database name.

user - User name for authentication.

password - Password used if the backend demands password authentication.

authtype - Authorization type. (No longer used, since the backend now chooses how to authenticate users. libpq still accepts and ignores this keyword for backward compatibility.)

options - Trace/debug options to send to backend.

tty - File or tty for optional debug output from backend.

Like PQsetdbLogin, PQconnectdb uses environment variables or built-in default values for unspecified options.

PQconndefaults

Returns the default connection options.

    PQconninfoOption *PQconndefaults(void)

    struct PQconninfoOption
    {
        char   *keyword;   /* The keyword of the option */
        char   *envvar;    /* Fallback environment variable name */
        char   *compiled;  /* Fallback compiled-in default value */
        char   *val;       /* Option's value */
        char   *label;     /* Label for field in connect dialog */
        char   *dispchar;  /* Character to display for this field
                              in a connect dialog. Values are:
                              ""   Display entered value as is
                              "*"  Password field - hide value */
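The keyword = value convention described above (optional spaces around the equal sign, single quotes around a value containing spaces) can be illustrated with a small standalone routine. This is a simplified sketch for illustration only, not libpq's actual conninfo parser; the function name parse_conninfo and the fixed-size buffers are assumptions.

```c
#include <string.h>

/* Split a conninfo-style string such as "host=localhost port = 5432"
 * into keyword/value pairs.  Spaces around '=' are optional; a value
 * in single quotes may contain spaces.  Returns the number of pairs
 * parsed into the caller's arrays. */
static int parse_conninfo(const char *s,
                          char keys[][64], char vals[][64], int max)
{
    int n = 0;
    while (*s && n < max) {
        while (*s == ' ') s++;              /* skip leading blanks */
        if (!*s) break;
        int k = 0;
        while (*s && *s != '=' && *s != ' ')
            keys[n][k++] = *s++;            /* collect the keyword */
        keys[n][k] = '\0';
        while (*s == ' ') s++;
        if (*s == '=') s++;                 /* spaces around '=' optional */
        while (*s == ' ') s++;
        int v = 0;
        if (*s == '\'') {                   /* quoted value may hold spaces */
            s++;
            while (*s && *s != '\'')
                vals[n][v++] = *s++;
            if (*s == '\'') s++;
        } else {
            while (*s && *s != ' ')
                vals[n][v++] = *s++;
        }
        vals[n][v] = '\0';
        n++;
    }
    return n;
}
```

For example, parsing "host=localhost port = 5432 password='a b'" yields three pairs, with the quoted password preserved as "a b".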
values below. We will do this with a select statement.

Now we're ready to update pg_amop with our new operator class. The most important thing in this entire discussion is that the operators are ordered, from less-than through greater-than, in pg_amop. We add the instances we need:

    INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,
                         amopselect, amopnpages)
        SELECT am.oid, opcl.oid, c.opoid, 1,
               'btreesel'::regproc, 'btreenpage'::regproc
        FROM pg_am am, pg_opclass opcl, complex_abs_ops_tmp c
        WHERE amname = 'btree' AND
              opcname = 'complex_abs_ops' AND
              c.oprname = '<';

Now do this for the other operators, substituting for the "1" in the third line above and the "<" in the last line. Note the order: "less than" is 1, "less than or equal" is 2, "equal" is 3, "greater than or equal" is 4, and "greater than" is 5.

The next step is registration of the "support routine" previously described in our discussion of pg_am. The oid of this support routine is stored in the pg_amproc class, keyed by the access method oid and the operator class oid. First, we need to register the function in Postgres (recall that we put the C code that implements this routine in the bottom of the file in which we implemented the operator routines):

    CREATE FUNCTION complex_abs_cmp(complex, complex)
        RETURNS int4
        AS 'PGROOT/tutorial/obj/complex.so'
        LANGUAGE 'c';

    SELECT oid, proname FROM pg_proc
        WHERE proname = 'complex_abs_cmp';

Again, your oid number
which holds the name of the column, and a pointer to a VAR node. There are two important numbers in the VAR node. The field varno gives the position of the relation containing the current attribute in the range table entry list created above. The field varattno gives the position of the attribute within the relation. If the name of an attribute cannot be found, an error will be returned and the query processing will be aborted.

The Postgres Rule System

Postgres supports a powerful rule system for the specification of views and ambiguous view updates. Originally the Postgres rule system consisted of two implementations. The first one worked using tuple-level processing and was implemented deep in the executor. The rule system was called whenever an individual tuple had been accessed. This implementation was removed in 1995 when the last official release of the Postgres project was transformed into Postgres95. The second implementation of the rule system is a technique called query rewriting. The rewrite system is a module that exists between the parser stage and the planner/optimizer. This technique is still implemented.

For information on the syntax and creation of rules in the Postgres system, refer to The PostgreSQL User's Guide.

The Rewrite System

The query rewrite system is a module between the parser stage and the planner/optimizer. It processes the tree handed back by the parser stage (which represents a user query) and, if there is a r
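The varno/varattno resolution described above can be sketched in a few lines. The struct and function names below are illustrative only (they are not the real parser node types), and ambiguity handling is omitted; the point is just how an attribute name becomes a pair of positions, both counted from 1.

```c
#include <string.h>

/* Hypothetical, simplified stand-ins for a range table entry and a
 * VAR node, showing what varno and varattno mean. */
struct rte { const char *relname; const char *atts[4]; int natts; };
struct var { int varno; int varattno; };

/* Returns 1 and fills *v if attname is found in some relation of the
 * range table; returns 0 to signal "attribute not found" (the case
 * where query processing would be aborted with an error). */
static int lookup_var(struct rte *rtable, int nrels,
                      const char *attname, struct var *v)
{
    for (int r = 0; r < nrels; r++)
        for (int a = 0; a < rtable[r].natts; a++)
            if (strcmp(rtable[r].atts[a], attname) == 0) {
                v->varno = r + 1;      /* position of relation in rtable */
                v->varattno = a + 1;   /* position of attribute in relation */
                return 1;
            }
    return 0;
}
```

With a two-entry range table, an attribute found in the second relation's second column resolves to varno 2, varattno 2.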
0.

    char *PQfname(PGresult *res, int field_index);

PQfnumber

Returns the field (attribute) index associated with the given field name.

    int PQfnumber(PGresult *res, char *field_name);

-1 is returned if the given name does not match any field.

PQftype

Returns the field type associated with the given field index. The integer returned is an internal coding of the type. Field indices start at 0.

    Oid PQftype(PGresult *res, int field_num);

PQfsize

Returns the size in bytes of the field associated with the given field index. Field indices start at 0.

    int PQfsize(PGresult *res, int field_index);

Chapter 16: libpq

PQfsize returns the space allocated for this field in a database tuple, in other words the size of the server's binary representation of the data type. -1 is returned if the field is variable size.

PQfmod

Returns the type-specific modification data of the field associated with the given field index. Field indices start at 0.

    int PQfmod(PGresult *res, int field_index);

PQgetvalue

Returns a single field (attribute) value of one tuple of a PGresult. Tuple and field indices start at 0.

    char *PQgetvalue(PGresult *res, int tup_num, int field_num);

For most queries, the value returned by PQgetvalue is a null-terminated ASCII string representation of the attribute value. But if PQbinaryTuples() is TRUE, the value returned by PQgetvalue is the binary representation of the type in the internal format of the backend serv
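The PQfnumber contract described above (0-based index on a match, -1 when the name does not match any field) amounts to a linear lookup over the result's column names. The helper below is a toy illustration of that contract, not part of libpq.

```c
#include <string.h>

/* Toy illustration of the PQfnumber contract: given the result's
 * column names, return the 0-based index of field_name, or -1 if the
 * name does not match any field.  (Hypothetical helper, not libpq.) */
static int fnumber(const char *const *fnames, int nfields,
                   const char *field_name)
{
    for (int i = 0; i < nfields; i++)
        if (strcmp(fnames[i], field_name) == 0)
            return i;
    return -1;
}
```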
0.

PQdisplayTuples

Prints out all the tuples and, optionally, the attribute names to the specified output stream.

    void PQdisplayTuples(PGresult *res,
                         FILE *fout,          /* output stream */
                         int fillAlign,       /* space fill to align columns */
                         const char *fieldSep,/* field separator */
                         int printHeader,     /* display headers? */
                         int quiet);          /* suppress print of row count at end */

PQdisplayTuples was intended to supersede PQprintTuples, and is in turn superseded by PQprint.

PQclear

Frees the storage associated with the PGresult. Every query result should be freed via PQclear when it is no longer needed.

    void PQclear(PGresult *res);

You can keep a PGresult object around for as long as you need it; it does not go away when you issue a new query, nor even if you close the connection. To get rid of it, you must call PQclear. Failure to do this will result in memory leaks in the frontend application.

PQmakeEmptyPGresult

Constructs an empty PGresult object with the given status.

    PGresult *PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status);

This is libpq's internal routine to allocate and initialize an empty PGresult object. It is exported because some applications find it useful to generate result objects (particularly objects with error status) themselves. If conn is not NULL and status indicates an error, the connection's current errorMessage is copied into the PGresult. Note that PQclear should eventually be called on the object, just as with a P
01. Any Y2K problems in the underlying OS related to obtaining the current time may propagate into apparent Y2K problems in Postgres.

Refer to The Gnu Project (http://www.gnu.org/software/year2000.html) and The Perl Institute (http://language.perl.com/news/y2k.html) for further discussion of Y2K issues, particularly as it relates to open source, no fee software.

Copyrights and Trademarks

PostgreSQL is © 1996-9 by the PostgreSQL Global Development Group, and is distributed under the terms of the Berkeley license.

Postgres95 is © 1994-5 by the Regents of the University of California. Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.

In no event shall the University of California be liable to any party for direct, indirect, special, incidental, or consequential damages, including lost profits, arising out of the use of this software and its documentation, even if the University of California has been advised of the possibility of such damage.

The University of California specifically disclaims any warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The software provided hereunder is on an "as-is" basis, and the University of California has no obligations
(revision-tree diagram: files at revisions 1.1 through 1.6, with the tag drawn across them)

then the tag TAG will reference file1-1.2, file2-1.3, etc.

Note: For creating a release branch, other than the -b option added to the command, it's the same thing.

So, to create the v6.4 release, I did the following:

    cd pgsql
    cvs tag -b REL6_4

Appendix DG1: The CVS Repository

which will create the tag and the branch for the RELEASE tree.

Now, for those with CVS access, it's too simple. First, create two subdirectories, RELEASE and CURRENT, so that you don't mix up the two. Then do:

    cd RELEASE
    cvs checkout -P -r REL6_4 pgsql
    cd ../CURRENT
    cvs checkout -P pgsql

which results in two directory trees, RELEASE/pgsql and CURRENT/pgsql. From that point on, CVS will keep track of which repository branch is in which directory tree, and will allow independent updates of either tree.

If you are only working on the CURRENT source tree, you just do everything as before we started tagging release branches.

After you've done the initial checkout on a branch (cvs checkout -r REL6_4) anything you do within that directory structure is restricted to that branch. If you apply a patch to that directory structure and do a cvs commit while inside of it, the patch is applied to the branch and only the branch.

Getting The Source Via Anonymous CVS

If you would like to keep up with the current sources on a regular basis, you can fetch them from our CVS server and then use CVS to retrieve updates from time to
Chapter 18: pgtcl

pg_disconnect

Name: pg_disconnect - closes a connection to the backend server

Synopsis:

    pg_disconnect dbHandle

Inputs: dbHandle - Specifies a valid database handle.

Outputs: None

Description: pg_disconnect closes a connection to the Postgres backend.

pg_conndefaults

Name: pg_conndefaults - obtain information about default connection parameters

Synopsis:

    pg_conndefaults

Inputs: None

Outputs: option list - The result is a list describing the possible connection options and their current default values. Each entry in the list is a sublist of the format:

    {optname label dispchar dispsize value}

where the optname is usable as an option in pg_connect -conninfo.

Description: pg_conndefaults returns info about the connection options available in pg_connect -conninfo and the current default value for each option.

Usage:

    pg_conndefaults

pg_exec

Name: pg_exec - send a query string to the backend

Synopsis:

    pg_exec dbHandle queryString

Inputs: dbHandle - Specifies a valid database handle. queryString - Specifies a valid SQL query.

Outputs: resultHandle - A Tcl error will be returned if Pgtcl was unable to obtain a backend response. Otherwise, a query result object is created and a handle for it is returned. This handle can be passed to pg_result to obtain the results of the query.

Description: pg_exec submits a query to the Postgres
Table 9-3. pg_amproc Schema

amopid - the oid of the pg_am instance for B-tree (403, see above)

amopclaid - the oid of the pg_opclass instance for complex_abs_ops (whatever you got instead of 17314, see above)

amopopr - the oids of the operators for the opclass (which we'll get in just a minute)

The cost functions are used by the query optimizer to decide whether or not to use a given index in a scan. Fortunately, these already exist. The two functions we'll use are btreesel, which estimates the selectivity of the B-tree, and btreenpage, which estimates the number of pages a search will touch in the tree.

So we need the oids of the operators we just defined. We'll look up the names of all the operators that take two complexes, and pick ours out:

    SELECT o.oid AS opoid, o.oprname
        INTO TABLE complex_ops_tmp
        FROM pg_operator o, pg_type t
        WHERE o.oprleft = t.oid AND o.oprright = t.oid
            AND t.typname = 'complex_abs';

     opoid | oprname
    -------+---------
     17321 | <
     17322 | <=
     17323 | =
     17324 | >=
     17325 | >

Chapter 9: Interfacing Extensions To Indices

Again, some of your oid numbers will almost certainly be different. The operators we are interested in are those with oids 17321 through 17325. The values you get will probably be different, and you should substitute them for the
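The fixed B-tree strategy numbering used above (the five comparison operators are registered in pg_amop in a set order) can be captured in a small lookup table. This is an illustrative helper, not a Postgres API.

```c
#include <string.h>

/* B-tree strategy numbers as described in the text: less than is 1,
 * less than or equal is 2, equal is 3, greater than or equal is 4,
 * and greater than is 5.  (Illustrative table only.) */
static const char *btree_strategy_op(int strategy)
{
    switch (strategy) {
        case 1: return "<";
        case 2: return "<=";
        case 3: return "=";
        case 4: return ">=";
        case 5: return ">";
        default: return NULL;   /* not a valid B-tree strategy number */
    }
}
```

So the INSERT shown above, which uses strategy number 1, registers the "<" operator; repeating it with 2 through 5 registers the remaining four.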
CREATE TABLE shoe_data (
        shoename   char(10),     -- primary key
        sh_avail   integer,      -- available # of pairs
        slcolor    char(10),     -- preferred shoelace color
        slminlen   float,        -- minimum shoelace length
        slmaxlen   float,        -- maximum shoelace length
        slunit     char(8)       -- length unit
    );

    CREATE TABLE shoelace_data (
        sl_name    char(10),     -- primary key
        sl_avail   integer,      -- available # of pairs
        sl_color   char(10),     -- shoelace color
        sl_len     float,        -- shoelace length
        sl_unit    char(8)       -- length unit
    );

    CREATE TABLE unit (
        un_name    char(8),      -- the primary key
        un_fact    float         -- factor to transform to cm
    );

I think most of us wear shoes and can realize that this is really useful data. Well, there are shoes out in the world that don't require shoelaces, but this doesn't make Al's life easier, and so we ignore it.

The views are created as:

    CREATE VIEW shoe AS
        SELECT sh.shoename,
               sh.sh_avail,
               sh.slcolor,
               sh.slminlen,
               sh.slminlen * un.un_fact AS slminlen_cm,
               sh.slmaxlen,
               sh.slmaxlen * un.un_fact AS slmaxlen_cm,
               sh.slunit
        FROM shoe_data sh, unit un
        WHERE sh.slunit = un.un_name;

    CREATE VIEW shoelace AS
        SELECT s.sl_name,
               s.sl_avail,
               s.sl_color,
               s.sl_len,
               s.sl_unit,
               s.sl_len * u.un_fact AS sl_len_cm
        FROM shoelace_data s, unit u
        WHERE s.sl_unit = u.un_name;

    CREATE VIEW shoe_ready AS
        SELECT rsh.shoename,
               rsh.sh_avail,
               rsl.sl_name,
               rsl.sl_avail,
               min(rsh.sh_avail, rsl.sl_avail) AS total_avail
        FROM shoe rsh, shoelace rsl
        WHER
OLD*, shoelace *NEW*, shoelace_data s, unit u
     WHERE bpchareq(s.sl_unit, u.un_name);

And in step 3 it replaces all the variables in the parsetree that reference the rangetable entry (the one for shoelace that is currently processed) by the corresponding targetlist expressions from the rule action. This results in the final query:

    SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit,
           float8mul(s.sl_len, u.un_fact) AS sl_len_cm
      FROM shoelace shoelace, shoelace *OLD*, shoelace *NEW*,
           shoelace_data s, unit u
     WHERE bpchareq(s.sl_unit, u.un_name);

Turning this back into a real SQL statement, a human user would type:

    SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit,
           s.sl_len * u.un_fact AS sl_len_cm
      FROM shoelace_data s, unit u
     WHERE s.sl_unit = u.un_name;

That was the first rule applied. While this was done, the rangetable has grown, so the rule system continues checking the range table entries. The next one is number 2 (shoelace *OLD*). Relation shoelace has a rule, but this rangetable entry isn't referenced in any of the variables of the parsetree, so it is ignored.

Since all the remaining rangetable entries either have no rules in pg_rewrite or aren't referenced, it reaches the end of the rangetable. Rewriting is complete and the above is the final result given into the optimizer. The optimizer ignores the extra rangetable entries that aren't referenced by variables in the parsetree, and the plan produced by the
UPDATE, the missing columns from t1 are added to the targetlist by the optimizer, and the final parsetree will read as:

    UPDATE t1 SET a = t1.a, b = t2.b WHERE t1.a = t2.a;

and thus the executor run over the join will produce exactly the same result set as a

    SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;

will do. But there is a little problem in UPDATE. The executor does not care what the results from the join it is doing are meant for. It just produces a result set of rows. The difference that one is a SELECT command and the other is an UPDATE is handled in the caller of the executor. The caller still knows (looking at the parsetree) that this is an UPDATE, and he knows that this result should go into table t1. But which of the 666 rows that are there has to be replaced by the new row? The plan executed is a join with a qualification that potentially could produce any number of rows between 0 and 666 in unknown order.

To resolve this problem, another entry is added to the targetlist in UPDATE and DELETE statements: the current tuple ID (ctid). This is a system attribute with a special feature. It contains the block and position in the block for the row. Knowing the table, the ctid can be used to find one specific row in a 1.5GB sized table containing millions of rows by fetching one single data block. After adding the ctid to the targetlist, the final result set could be defined as:

    SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a;

Now another
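The "block and position in the block" idea behind ctid can be sketched with simple arithmetic: a row's physical address splits into a block number and a line position within that block, so locating the row costs one block fetch. The capacity constant and the encoding below are assumptions for illustration; they are not the on-disk tuple-identifier format.

```c
/* Illustrative sketch of a ctid-like address: (block, position in
 * block).  ITEMS_PER_BLOCK is an assumed capacity for illustration
 * only; real blocks hold a variable number of tuples. */
#define ITEMS_PER_BLOCK 100

struct ctid { unsigned block; unsigned posid; };

static struct ctid make_ctid(unsigned long row_seqno)
{
    struct ctid t;
    t.block = (unsigned) (row_seqno / ITEMS_PER_BLOCK);
    t.posid = (unsigned) (row_seqno % ITEMS_PER_BLOCK) + 1;  /* positions count from 1 */
    return t;
}
```

With this assumed layout, the 251st row (sequence number 250) lives at block 2, position 51: one data block read finds it, regardless of table size.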
DEBUG "have table $C(relname)"

will print a DEBUG log message for every row of pg_class. The return value of spi_exec is the number of rows affected by the query, as found in the global variable SPI_processed.

spi_prepare query typelist

Prepares AND SAVES a query plan for later execution. It is a bit different from the C level SPI_prepare in that the plan is automatically copied to the toplevel memory context. Thus, there is currently no way of preparing a plan without saving it.

If the query references arguments, the type names must be given as a Tcl list. The return value from spi_prepare is a query ID to be used in subsequent calls to spi_execp. See spi_execp for a sample.

spi_execp ?-count n? ?-array name? ?-nulls str? query ?value-list? ?loop-body?

Execute a prepared plan from spi_prepare with variable substitution. The optional -count value tells spi_execp the maximum number of rows to be processed by the query.

The optional value for -nulls is a string of spaces and 'n' characters telling spi_execp which of the values are NULLs. If given, it must have exactly the length of the number of values.

The queryid is the ID returned by the spi_prepare call.

If there was a typelist given to spi_prepare, a Tcl list of values of exactly the same length must be given to spi_execp after the query. If the type list on spi_prepare was empty, this argument must be omitted.

If the query is a SELECT statement
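The -nulls string convention above is easy to check mechanically: the string may contain only spaces and 'n' characters, and it must be exactly as long as the value list. The validator below is a hypothetical helper written for illustration; it is not part of PL/Tcl.

```c
#include <string.h>

/* Check the -nulls string convention described above: only ' ' and
 * 'n' characters are allowed, and the length must equal the number of
 * values.  (Hypothetical validator, not part of PL/Tcl.) */
static int nulls_string_ok(const char *nulls, int nvalues)
{
    if ((int) strlen(nulls) != nvalues)
        return 0;                       /* wrong length */
    for (const char *p = nulls; *p; p++)
        if (*p != ' ' && *p != 'n')
            return 0;                   /* illegal character */
    return 1;
}
```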
Data;

tg_event describes the event for which the function is called. You may use the following macros to examine tg_event:

TRIGGER_FIRED_BEFORE(event) returns TRUE if trigger fired BEFORE;
TRIGGER_FIRED_AFTER(event) returns TRUE if trigger fired AFTER;
TRIGGER_FIRED_FOR_ROW(event) returns TRUE if trigger fired for ROW-level event;
TRIGGER_FIRED_FOR_STATEMENT(event) returns TRUE if trigger fired for STATEMENT-level event;
TRIGGER_FIRED_BY_INSERT(event) returns TRUE if trigger fired by INSERT;
TRIGGER_FIRED_BY_DELETE(event) returns TRUE if trigger fired by DELETE;
TRIGGER_FIRED_BY_UPDATE(event) returns TRUE if trigger fired by UPDATE.

tg_relation is a pointer to a structure describing the triggered relation. Look at src/include/utils/rel.h for details about this structure. The most interesting things are tg_relation->rd_att (descriptor of the relation tuples) and tg_relation->rd_rel->relname (the relation's name; this is not char* but NameData - use SPI_getrelname(tg_relation) to get a char* if you need a copy of the name).

Chapter 13: Triggers

tg_trigtuple is a pointer to the tuple for which the trigger is fired. This is the tuple being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE). If INSERT/DELETE, then this is what you are to return to the Executor if you don't want to replace the tuple with another one (INSERT) or skip the operation.

tg_newtuple is a pointer to the new version of the tuple, if UPDATE, a
................................................................. 68
        Exceptions .......................................................... 69
        Examples ............................................................ 69
        Some Simple PL/pgSQL Functions ...................................... 70
        PL/pgSQL Function on Composite Type ................................. 70
        PL/pgSQL Trigger Procedure .......................................... 70
    PL/Tcl .................................................................. 71
        Overview ............................................................ 71
        Description ......................................................... 71
            Postgres Functions and Tcl Procedure Names ...................... 71
            Defining Functions in PL/Tcl .................................... 71
            Global Data in PL/Tcl ........................................... 72
            Trigger Procedures in PL/Tcl .................................... 72
            Database Access from PL/Tcl ..................................... 74
12. Linking Dynamically-Loaded Functions .................................... 76
    ULTRIX .................................................................. 77
    DEC OSF/1 ............................................................... 77
    SunOS 4.x, Solaris 2.x and HP-UX ........................................ 78
13. Triggers ................................................................ 79
    Trigger Creation ........................................................ 79
    Interaction with the Trigger Manager .................................... 80
    Visibility of Data Changes .............................................. 81
    Examples ................................................................ 82
14. Server Programming Interface ............................................ 85
    Interface Functions ..................................................... 86
        SPI_connect ......................................................... 86
        SPI_finish .......................................................... 87
        SPI_exec ............................................................ 89
        SPI_prepare ......................................................... 91
        SPI_saveplan ........................................................ 92
        SPI_execp ........................................................... 93
    Interface Support Functions ...........................................
. 95
        SPI_copytuple ....................................................... 95
        SPI_modifytuple ..................................................... 96
        SPI_fnumber ......................................................... 97
        SPI_fname ........................................................... 98
        SPI_getvalue ........................................................ 99
        SPI_getbinval ...................................................... 100
        SPI_gettype ........................................................ 100
        SPI_gettypeid ...................................................... 102
        SPI_getrelname ..................................................... 103
        SPI_palloc ......................................................... 104
        SPI_repalloc ....................................................... 105
        SPI_pfree .......................................................... 106
    Memory Management ...................................................... 106
    Visibility of Data Changes ............................................. 107
    Examples ............................................................... 107
15. Large Objects .......................................................... 110
    Historical Note ........................................................ 110
    Inversion Large Objects ................................................ 110
    Large Object Interfaces ................................................ 110
        Creating a Large Object ............................................ 111
        Importing a Large Object ........................................... 111
        Exporting a Large Object ........................................... 111
        Opening an Existing Large Object ................................... 111
        Writing Data to a Large Object ..................................... 111
        Seeking on a Large Object .......................................... 112
        Closing a Large Object Descriptor .................................. 112
    Built-in registered functions .......................................... 112
    Accessing Large Objects from LIBPQ ..................................... 112
    Sample Program ........................................................
E rsl.sl_color = rsh.slcolor
          AND rsl.sl_len_cm >= rsh.slminlen_cm
          AND rsl.sl_len_cm <= rsh.slmaxlen_cm;

Chapter 8: The Postgres Rule System

The CREATE VIEW command for the shoelace view (which is the simplest one we have) will create a relation shoelace and an entry in pg_rewrite that tells that there is a rewrite rule that must be applied whenever the relation shoelace is referenced in a query's rangetable. The rule has no rule qualification (discussed in the non-SELECT rules, since SELECT rules currently cannot have them) and it is INSTEAD. Note that rule qualifications are not the same as query qualifications! The rule's action has a qualification. The rule's action is one querytree that is an exact copy of the SELECT statement in the view creation command.

Note: The two extra range table entries for NEW and OLD (named *NEW* and *CURRENT* for historical reasons in the printed querytree) you can see in the pg_rewrite entry aren't of interest for SELECT rules.

Now we populate unit, shoe_data and shoelace_data, and Al types the first SELECT in his life:

    al_bundy=> INSERT INTO unit VALUES ('cm', 1.0);
    al_bundy=> INSERT INTO unit VALUES ('m', 100.0);
    al_bundy=> INSERT INTO unit VALUES ('inch', 2.54);
    al_bundy=>
    al_bundy=> INSERT INTO shoe_data VALUES
    al_bundy->    ('sh1', 2, 'black', 70.0, 90.0, 'cm');
    al_bundy=> INSERT INTO shoe_data VALUES
    al_bundy->    ('sh2', 0, 'black', 30.0,
PGconn *PQsetdbLogin(const char *pghost,
                         const char *pgport,
                         const char *pgoptions,
                         const char *pgtty,
                         const char *dbName,
                         const char *login,
                         const char *pwd);

If any argument is NULL, then the corresponding environment variable (see the Environment Variables section) is checked. If the environment variable is also not set, then hardwired defaults are used. The return value is a pointer to an abstract struct representing the connection to the backend.

PQsetdb

Makes a new connection to a backend.

    PGconn *PQsetdb(char *pghost,
                    char *pgport,
                    char *pgoptions,
                    char *pgtty,
                    char *dbName);

This is a macro that calls PQsetdbLogin with null pointers for the login and pwd parameters. It is provided primarily for backward compatibility with old programs.

PQconnectdb

Makes a new connection to a backend.

    PGconn *PQconnectdb(const char *conninfo);

This routine opens a new database connection using parameters taken from a string. Unlike PQsetdbLogin, the parameter set can be extended without changing the function signature, so use of this routine is encouraged for new application programming. The passed string can be empty to use all default parameters, or it can contain one or more parameter settings separated by whitespace. Each parameter setting is in the form keyword = value. To write a null value or a value containing spaces, surround it with single quotes, e.g., keyword = 'a value'. Single quotes within the
Gresult returned by libpq itself.

Asynchronous Query Processing

The PQexec function is adequate for submitting queries in simple synchronous applications. It has a couple of major deficiencies, however:

PQexec waits for the query to be completed. The application may have other work to do (such as maintaining a user interface), in which case it won't want to block waiting for the response.

Since control is buried inside PQexec, it is hard for the frontend to decide it would like to try to cancel the ongoing query. (It can be done from a signal handler, but not otherwise.)

PQexec can return only one PGresult structure. If the submitted query string contains multiple SQL commands, all but the last PGresult are discarded by PQexec.

Applications that do not like these limitations can instead use the underlying functions that PQexec is built from: PQsendQuery and PQgetResult.

PQsendQuery

Submit a query to Postgres without waiting for the result(s). TRUE is returned if the query was successfully dispatched, FALSE if not (in which case, use PQerrorMessage to get more information about the failure).

    int PQsendQuery(PGconn *conn, const char *query);

After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the query results. PQsendQuery may not be called again (on the same connection) until PQgetResult has returned NULL, indicating that the query is done.

PQgetResult

Wait for the next result
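The PQsendQuery/PQgetResult discipline described above (dispatch once, then drain results until NULL before sending again) can be shown self-contained by replacing the backend with a stub. The mock types and functions below are assumptions for illustration; they are not the libpq API, but the control flow is the one the text prescribes.

```c
#include <stddef.h>

/* Stub "connection" holding a count of results still queued for the
 * current query, so the drain-until-NULL pattern can be exercised
 * without a real backend. */
struct mock_conn { int pending; };

/* Like PQsendQuery: returns 1 (TRUE) if dispatched, 0 (FALSE) if the
 * previous query has not been fully drained yet. */
static int send_query(struct mock_conn *c, const char *query, int nresults)
{
    if (c->pending > 0)
        return 0;              /* must drain prior results first */
    (void) query;
    c->pending = nresults;     /* e.g. one result per SQL command sent */
    return 1;
}

/* Like PQgetResult: one result per call, then NULL when the query is
 * done (and only then may a new query be sent). */
static const char *get_result(struct mock_conn *c)
{
    if (c->pending == 0)
        return NULL;
    c->pending--;
    return "result";
}
```

A caller loops `while (get_result(&c) != NULL)` after each successful send, which is exactly the rule stated above for the real functions.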
Chapter 25: Frontend/Backend Protocol

Note: Written by Phil Thompson (mailto:phil@river-bank.demon.co.uk). Updates for protocol 2.0 by Tom Lane (mailto:tgl@sss.pgh.pa.us).

Postgres uses a message-based protocol for communication between frontends and backends. The protocol is implemented over TCP/IP and also on Unix sockets. Postgres v6.3 introduced version numbers into the protocol. This was done in such a way as to still allow connections from earlier versions of frontends, but this document does not cover the protocol used by those earlier versions.

This document describes version 2.0 of the protocol, implemented in Postgres v6.4 and later.

Higher level features built on this protocol (for example, how libpq passes certain environment variables after the connection is established) are covered elsewhere.

Overview

The three major components are the frontend (running on the client) and the postmaster and backend (runnin
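A message-based protocol over a byte stream needs framing so the receiver knows where one message ends and the next begins. The sketch below shows one common framing scheme: a one-byte message type followed by a 32-bit big-endian payload length. This layout is an illustrative assumption, not the actual Postgres v2.0 wire format, whose exact message layouts are specified later in the chapter.

```c
#include <string.h>
#include <stdint.h>

/* Frame a message as: 1 type byte, 4-byte big-endian payload length,
 * then the payload.  Returns the total number of bytes written.
 * (Generic framing sketch, not the Postgres wire format.) */
static size_t frame_message(unsigned char *buf, char type,
                            const void *payload, uint32_t len)
{
    buf[0] = (unsigned char) type;
    buf[1] = (unsigned char) (len >> 24);   /* most significant byte first */
    buf[2] = (unsigned char) (len >> 16);
    buf[3] = (unsigned char) (len >> 8);
    buf[4] = (unsigned char) len;
    memcpy(buf + 5, payload, len);
    return 5 + len;
}

/* Decode the big-endian length field of a framed message. */
static uint32_t read_length(const unsigned char *buf)
{
    return ((uint32_t) buf[1] << 24) | ((uint32_t) buf[2] << 16) |
           ((uint32_t) buf[3] << 8)  |  (uint32_t) buf[4];
}
```

The receiver reads 5 bytes, learns the payload length, then reads exactly that many more bytes: no delimiters are needed, and binary payloads are safe.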
SPI_ERROR_TRANSACTION if BEGIN/ABORT/END. SPI_ERROR_OPUNKNOWN if the type of query is unknown (this shouldn't occur).

Algorithm: SPI_exec performs the following: Disconnects your procedure from the SPI manager and frees all memory allocations made by your procedure via palloc since the SPI_connect. These allocations can't be used any more! See Memory management.

Chapter 14: Server Programming Interface

SPI_prepare

Name: SPI_prepare - Creates and returns an execution plan

Synopsis:

    SPI_prepare(query, nargs, argtypes)

Inputs:

query - Query string.
nargs - Number of input parameters ($1 ... $nargs, as in SQL functions).
argtypes - Pointer to list of type OIDs for the input arguments.

Outputs:

void * - Pointer to an execution plan (parser+planner+optimizer).

Description: SPI_prepare creates and returns an execution plan (parser+planner+optimizer) but doesn't execute the query. Should only be called from a connected procedure.

Usage: nargs is the number of parameters ($1 ... $nargs, as in SQL functions), and nargs may be 0 only if there is no $1 in the query.

Execution of prepared execution plans is sometimes much faster, so this feature may be useful if the same query will be executed many times. The plan returned by SPI_prepare may be used only in the current invocation of the procedure, since SPI_finish frees memory allocated for a plan. See SPI_saveplan.

If successful, a non-null pointer will be returned. Otherwise, you'll get a
SPI has no ability to automatically free allocations in the upper Executor context! SPI automatically frees memory allocated during execution of a query when this query is done.

Visibility of Data Changes

Postgres data changes visibility rule: during a query execution, data changes made by the query itself (via SQL-function, SPI-function, triggers) are invisible to the query scan. For example, in query

    INSERT INTO a SELECT * FROM a

tuples inserted are invisible for SELECT's scan. In effect, this duplicates the database table within itself (subject to unique index rules, of course) without recursing.

Changes made by query Q are visible to queries which are started after query Q, no matter whether they are started inside Q (during the execution of Q) or after Q is done.

Examples

This example of SPI usage demonstrates the visibility rule. There are more complex examples in src/test/regress/regress.c and in contrib/spi.

This is a very simple example of SPI usage. The procedure execq accepts an SQL query in its first argument and tcount in its second, executes the query using SPI_exec, and returns the number of tuples for which the query was executed:

    #include "executor/spi.h"   /* this is what you need to work with SPI */

    int execq(text *sql, int cnt);

    int
    execq(text *sql, int cnt)
    {
        int ret;
        int proc = 0;

        SPI_connect();

        ret = SPI_exec(textout(sql), cnt);

        proc = SPI_processed;
        /* If this is SELECT and some tuple(s) fetched, r
SPI manager and discards the result. Identifiers like local variables are still substituted into parameters.

Returning from the function:

    RETURN expression

The function terminates and the value of expression will be returned to the upper executor. The return value of a function cannot be undefined. If control reaches the end of the toplevel block of the function without hitting a RETURN statement, a runtime error will occur. The expression's result will be automatically casted into the function's return type, as described for assignments.

Aborting and messages:

As indicated in the above examples, there is a RAISE statement that can throw messages into the Postgres elog mechanism.

    RAISE level ''format'' [, identifier [...]];

Inside the format, '%' is used as a placeholder for the subsequent comma-separated identifiers. Possible levels are DEBUG (silently suppressed in production running databases), NOTICE (written into the database log and forwarded to the client application) and EXCEPTION (written into the database log and aborting the transaction).

Conditionals:

    IF expression THEN
        statements
    ELSE
        statements
    END IF;

The expression must return a value that at least can be casted into a boolean type.

Loops: There are multiple types of loops.

    <<label>>
    LOOP
        statements
    END LOOP;

An unconditional loop that must be terminated explicitly by an EXIT statement. The optional label ca
Chapter 28. Backend Interface

OPEN classname: Open the class called classname for further manipulation.

CLOSE [classname]: Close the open class called classname. It is an error if classname is not already opened. If no classname is given, then the currently open class is closed.

PRINT: Print the currently open class.

INSERT [OID=oid_value] (value1 value2 ...): Insert a new instance to the open class using value1, value2, etc., for its attribute values and oid_value for its OID. If oid_value is not 0, then this value will be used as the instance's object identifier. Otherwise, it is an error.

INSERT (value1 value2 ...): As above, but the system generates a unique object identifier.

CREATE classname (name1 = type1 [, name2 = type2 [, ...]]): Create a class named classname with the attributes given in parentheses.

OPEN (name1 = type1 [, name2 = type2 [, ...]]) AS classname: Open a class named classname for writing, but do not record its existence in the system catalogs. This is primarily to aid in bootstrapping.

DESTROY classname: Destroy the class named classname.

DEFINE INDEX indexname ON class_name USING amname (opclass attr | (function(attr))): Create an index named indexname on the class named classname using the amname access method. The fields to index are called name1, name2, etc., and the operator collections to use are collection_1, collection_2, etc., respectively.

Note: This last sentence doesn't reference anything
UNIX tools yacc and lex. The transformation process does modifications and augmentations to the data structures returned by the parser.

Parser

The parser has to check the query string (which arrives as plain ASCII text) for valid syntax. If the syntax is correct, a parse tree is built up and handed back; otherwise an error is returned. For the implementation the well-known UNIX tools lex and yacc are used.

The lexer is defined in the file scan.l and is responsible for recognizing identifiers, the SQL keywords, etc. For every keyword or identifier that is found, a token is generated and handed to the parser. The parser is defined in the file gram.y and consists of a set of grammar rules and actions that are executed whenever a rule is fired. The code of the actions (which is actually C code) is used to build up the parse tree.

The file scan.l is transformed to the C source file scan.c using the program lex, and gram.y is transformed to gram.c using yacc. After these transformations have taken place, a normal C compiler can be used to create the parser. Never make any changes to the generated C files, as they will be overwritten the next time lex or yacc is called.

Chapter 22. Overview of PostgreSQL Internals

Note: The mentioned transformations and compilations are normally done automatically using the makefiles shipped with the Postgres source distribution.

A detailed description of yacc or the grammar rules given in gram.y would be bey
NULL if fnumber is out of range; SPI_result set to SPI_ERROR_NOATTRIBUTE on error.

Description: SPI_fname returns the attribute name for the specified attribute.

Usage: Attribute numbers are 1 based.

Algorithm: Returns a newly-allocated copy of the attribute name.

Chapter 14. Server Programming Interface

SPI_getvalue

Name: SPI_getvalue. Returns the string value of the specified attribute.

Synopsis:

    SPI_getvalue(tuple, tupdesc, fnumber)

Inputs:

    HeapTuple tuple : Input tuple to be examined
    TupleDesc tupdesc : Input tuple description
    int fnumber : Attribute number

Outputs:

    char * : Attribute value, or NULL if: the attribute is NULL; fnumber is out of range (SPI_result set to SPI_ERROR_NOATTRIBUTE); no output function is available (SPI_result set to SPI_ERROR_NOOUTFUNC)

Description: SPI_getvalue returns an external string representation of the value of the specified attribute.

Usage: Attribute numbers are 1 based.

Algorithm: Allocates memory as required by the value.

SPI_getbinval

Name: SPI_getbinval. Returns the binary value of the specified attribute.

Synopsis:

    SPI_getbinval(tuple, tupdesc, fnumber, isnull)

Inputs:

    HeapTuple tuple : Input tuple to be examined
    TupleDesc tupdesc : Input tuple description
    int fnumber : Attribute number

Outputs:

    Datum : Attribute binary value
    bool *isnull : flag for null value in attribute
    SPI_result : SPI_ERROR_NOAT
NULL plan. In both cases SPI_result will be set like the value returned by SPI_exec, except that it is set to SPI_ERROR_ARGUMENT if query is NULL, or nargs < 0, or nargs > 0 && argtypes is NULL.

SPI_saveplan

Name: SPI_saveplan. Saves a passed plan.

Synopsis:

    SPI_saveplan(plan)

Inputs:

    void *query : Passed plan

Outputs:

    void * : Execution plan location, NULL if unsuccessful
    SPI_result : SPI_ERROR_ARGUMENT if plan is NULL; SPI_ERROR_UNCONNECTED if procedure is unconnected

Description

SPI_saveplan stores a plan prepared by SPI_prepare in safe memory protected from freeing by SPI_finish or the transaction manager.

In the current version of Postgres there is no ability to store prepared plans in the system catalog and fetch them from there for execution. This will be implemented in future versions. As an alternative, there is the ability to reuse prepared plans in the consequent invocations of your procedure in the current session. Use SPI_execp to execute this saved plan.

Usage

SPI_saveplan saves a passed plan (prepared by SPI_prepare) in memory protected from freeing by SPI_finish and by the transaction manager, and returns a pointer to the saved plan. You may save the pointer returned in a local variable. Always check if this pointer is NULL or not, either when preparing a plan or using an already prepared plan in SPI_execp (see below).

Note: If one of the objects (a r
POSTGRESDIR on the make command line will cause LIBDIR and HEADERDIR to be rooted at the new directory you specify. ODBCINST is independent of POSTGRESDIR.

Here is how you would specify the various destinations explicitly:

    make BINDIR=bindir LIBDIR=libdir HEADERDIR=headerdir install

For example, typing

    make POSTGRESDIR=/opt/psqlodbc install

(after you've used ./configure and make) will cause the libraries and headers to be installed in the directories /opt/psqlodbc/lib and /opt/psqlodbc/include/iodbc respectively.

The command

    make POSTGRESDIR=/opt/psqlodbc HEADERDIR=/usr/local install

should cause the libraries to be installed in /opt/psqlodbc/lib and the headers in /usr/local/include/iodbc. If this doesn't work as expected, please contact one of the maintainers.

Chapter 20. ODBC Interface

Configuration Files

.odbc.ini contains user-specified access information for the psqlODBC driver. The file uses conventions typical for Windows Registry files, but despite this restriction can be made to work.

The .odbc.ini file has three required sections. The first is [ODBC Data Sources], which is a list of arbitrary names and descriptions for each database you wish to access. The second required section is the Data Source Specification, and there will be one of these sections for each database. Each section must be labeled with the name given in [ODBC Data Sources] and must contain the following entries:

    Driver = POSTGRESDIR/lib/lib
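As a sketch of the layout just described (the data source name, library filename, and the particular entries shown are illustrative, not the complete required list):

```
[ODBC Data Sources]
MyDB = A sample Postgres data source

[MyDB]
Driver     = POSTGRESDIR/lib/libpsqlodbc.so
Database   = mydb
Servername = localhost
```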
    PQfinish(conn);
    exit(1);
}

int
main(int argc, char **argv)
{
    char       *in_filename, *out_filename;
    char       *database;
    Oid         lobjOid;
    PGconn     *conn;
    PGresult   *res;

    if (argc != 4)
    {
        fprintf(stderr, "Usage: %s database_name in_filename out_filename\n",
                argv[0]);
        exit(1);
    }

    database = argv[1];
    in_filename = argv[2];
    out_filename = argv[3];

    /* set up the connection */
    conn = PQsetdb(NULL, NULL, NULL, NULL, database);

    /* check to see that the backend connection was successful */
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "Connection to database '%s' failed.\n", database);
        fprintf(stderr, "%s", PQerrorMessage(conn));
        exit_nicely(conn);
    }

    res = PQexec(conn, "begin");
    PQclear(res);

    printf("importing file %s\n", in_filename);
    lobjOid = importFile(conn, in_filename);
/*  lobjOid = lo_import(conn, in_filename); */

Chapter 15. Large Objects

    printf("as large object %d.\n", lobjOid);

    printf("picking out bytes 1000-2000 of the large object\n");
    pickout(conn, lobjOid, 1000, 1000);

    printf("overwriting bytes 1000-2000 of the large object with X's\n");
    overwrite(conn, lobjOid, 1000, 1000);

    printf("exporting large object to file %s\n", out_filename);
    exportFile(conn, lobjOid, out_filename);
/*  lo_export(conn, lobjOid, out_filename); */

    res = PQexec(conn, "end");
    PQclear(res);
    PQfinish(conn);
    exit(0);
}

Chapter 16. libpq

libpq is the C application programmer's interface to Postg
PL/pgSQL parser to identify real constant values other than the NULL keyword. All expressions are evaluated internally by executing a query

    SELECT expression

using the SPI manager. In the expression, occurrences of variable identifiers are substituted by parameters, and the actual values from the variables are passed to the executor in the parameter array. All expressions used in a PL/pgSQL function are only prepared and saved once.

The type checking done by the Postgres main parser has some side effects on the interpretation of constant values. In detail, there is a difference between what the two functions

    CREATE FUNCTION logfunc1 (text) RETURNS datetime AS '
        DECLARE
            logtxt ALIAS FOR $1;
        BEGIN
            INSERT INTO logtable VALUES (logtxt, ''now'');
            RETURN ''now'';
        END;
    ' LANGUAGE 'plpgsql';

and

    CREATE FUNCTION logfunc2 (text) RETURNS datetime AS '
        DECLARE
            logtxt ALIAS FOR $1;
            curtime datetime;
        BEGIN
            curtime := ''now'';
            INSERT INTO logtable VALUES (logtxt, curtime);
            RETURN curtime;
        END;
    ' LANGUAGE 'plpgsql';

do. In the case of logfunc1, the Postgres main parser knows when preparing the plan for the INSERT that the string 'now' should be interpreted as datetime, because the target field of logtable is of that type. Thus, it will make a constant from it at this time, and this constant value is then used in all invocations of logfunc1 during the lifetime of the backend. Needless to say, this isn't what the programmer wanted. In the c
PostgreSQL Programmer's Guide

The PostgreSQL Development Team

Edited by Thomas Lockhart

PostgreSQL is Copyright © 1996-9 by the Postgres Global Development Group.

Table of Contents

Summary
1. Introduction
   Resources
   Terminology
   Notation
   Y2K Statement
   Copyrights and Trademarks
2. Postgres Architectural Concepts
3. Extending SQL: An Overview
   How Extensibility Works
   The Postgres Type System
   About the Postgres System Catalogs
4. Extending SQL: Functions
   Query Language (SQL) Functions
   SQL Functions on Base Types
   SQL Functions on Composite Types
   Programming Language Functions
   Program
   Alternate Toolsets
Bibliography

List of Tables

3-1. Postgres System Catalogs
9-1. Index Schema
9-2. B-tree Strategies
9-3. pg_amproc Schema
18-1. pgtcl Commands
20-1. Postgres Signals
29-1. Sample Page Layout
DG2-1. Postgres Documentation Products

List of Figures

2-1. How a connection is established
3-1. The major Postgres system catalogs

Summary

Postgres, developed originally in the UC Berkeley Computer Science Department, pioneered many of the object-relational concepts now becoming available in some commercial databases. It provides SQL92/SQL3 language support, transaction integrity, and type extensibility. PostgreSQL is a public-domain, open source descendant of this original Berkeley code.

Chapter 1. Introduction

This document is the programmer's manual for the PostgreSQL (http://postgresql.org/) database management system, originally developed at the University of California at Berkeley. PostgreSQL is
      -> Seq Scan on software
      -> Hash
        -> Index Scan using comp_hostidx on computer

The other possible query is a

    DELETE FROM computer WHERE hostname = 'old';

with the execution plan

    Nestloop
      -> Index Scan using comp_hostidx on computer
      -> Index Scan using soft_hostidx on software

This shows that the optimizer does not realize that the qualification for the hostname on computer could also be used for an index scan on software when there are multiple qualification expressions combined with AND, which it does in the regexp version of the query. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over computer and 2000 index scans over software. The rule implementation will do it with two queries over indices. And it depends on the overall size of the software table whether the rule will still be faster in the seqscan situation: 2000 query executions over the SPI manager take some time, even if all the index blocks to look them up will soon appear in the cache.

The last query we look at is a

    DELETE FROM computer WHERE manufacturer = 'bim';

Again, this could result in many rows to be deleted from computer. So the trigger will again fire many queries into the executor. But the rule plan will again be the Nestloop over two IndexScans, only using another index on computer:

    Nestloop
      -> Index Scan using comp_manufidx on computer
      -> Index Scan using soft_hostidx on software

resu
FATAL, DEBUG and NOIND, like for the elog C function.

quote string

Duplicates all occurrences of single quote and backslash characters. It should be used when variables are used in the query string given to spi_exec or spi_prepare (not for the value list on spi_execp). Think about a query string like

    "SELECT '$val' AS ret"

where the Tcl variable val actually contains "doesn't". This would result in the final query string

    SELECT 'doesn't' AS ret

which would cause a parse error during spi_exec or spi_prepare. It should contain

    SELECT 'doesn''t' AS ret

and has to be written as

    "SELECT '[ quote $val ]' AS ret"

spi_exec ?-count n? ?-array name? query ?loop-body?

Calls parser/planner/optimizer/executor for query. The optional -count value tells spi_exec the maximum number of rows to be processed by the query.

If the query is a SELECT statement and the optional loop-body (a body of Tcl commands, like in a foreach statement) is given, it is evaluated for each row selected, and behaves as expected on continue/break. The values of selected fields are put into variables named as the column names. So a

    spi_exec "SELECT count(*) AS cnt FROM pg_proc"

will set the variable cnt to the number of rows in the pg_proc system catalog. If the option -array is given, the column values are stored in the associative array named name, indexed by the column name, instead of in individual variables.

    spi_exec -array C "SELECT * FROM pg_class" {
        elog
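The quote-doubling rule described above is simple enough to show in a standalone C helper. This is a sketch of the same escaping logic (not PL/Tcl's actual implementation); the caller must free() the result:

```c
#include <stdlib.h>
#include <string.h>

/* Return a newly allocated copy of src in which every single quote
 * and backslash is doubled, mirroring what the quote command does. */
char *quote_literal(const char *src)
{
    size_t      len = strlen(src);
    char       *out = malloc(2 * len + 1);  /* worst case: all doubled */
    char       *d = out;
    const char *s;

    if (out == NULL)
        return NULL;
    for (s = src; *s; s++)
    {
        if (*s == '\'' || *s == '\\')
            *d++ = *s;          /* emit the character a second time */
        *d++ = *s;
    }
    *d = '\0';
    return out;
}
```

For example, the input doesn't becomes doesn''t, which is safe to splice between single quotes in a query string.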
INSERT INTO pg_opclass (opcname, opcdeftype)
        SELECT 'complex_abs_ops', oid FROM pg_type
        WHERE typname = 'complex_abs';

    SELECT oid, opcname, opcdeftype
        FROM pg_opclass
        WHERE opcname = 'complex_abs_ops';

      oid  |     opcname     | opcdeftype
     ------+-----------------+------------
     17314 | complex_abs_ops |      29058

Note that the oid for your pg_opclass instance will be different. Don't worry about this, though. We'll get this number from the system later, just like we got the oid of the type here.

So now we have an access method and an operator class. We still need a set of operators; the procedure for defining operators was discussed earlier in this manual. For the complex_abs_ops operator class on Btrees, the operators we require are:

    absolute value less-than
    absolute value less-than-or-equal
    absolute value equal
    absolute value greater-than-or-equal
    absolute value greater-than

Suppose the code that implements the functions defined is stored in the file PGROOT/src/tutorial/complex.c. Part of the code looks like this (note that we will only show the equality operator for the rest of the examples; the other four operators are very similar; refer to complex.c or complex.source for the details):

    #define Mag(c) ((c)->x*(c)->x + (c)->y*(c)->y)

Chapter 9. Interfacing Extensions To Indices

    bool
    complex_abs_eq(Complex *a, Complex *b)
    {
        double amag = Mag
TRIBUTE

Description: SPI_getbinval returns the binary value of the specified attribute.

Usage: Attribute numbers are 1 based.

Algorithm: Does not allocate new space for the binary value.

SPI_gettype

Name: SPI_gettype. Returns the type name of the specified attribute.

Synopsis:

    SPI_gettype(tupdesc, fnumber)

Inputs:

    TupleDesc tupdesc : Input tuple description
    int fnumber : Attribute number

Outputs:

    char * : The type name for the specified attribute number
    SPI_result : SPI_ERROR_NOATTRIBUTE

Description: SPI_gettype returns a copy of the type name for the specified attribute.

Usage: Attribute numbers are 1 based.

Algorithm: Does not allocate new space for the binary value.

SPI_gettypeid

Name: SPI_gettypeid. Returns the type OID of the specified attribute.

Synopsis:

    SPI_gettypeid(tupdesc, fnumber)

Inputs:

    TupleDesc tupdesc : Input tuple description
    int fnumber : Attribute number

Outputs:

    OID : The type OID for the specified attribute number
    SPI_result : SPI_ERROR_NOATTRIBUTE

Description: SPI_gettypeid returns the type OID for the specified attribute.

Usage: Attribute numbers are 1 based.

Algorithm: TBD

SPI_getrelname

Name: SPI_getrelname. Returns the name of the specified relation.

Synopsis:

    SPI_getrelname(rel)

Inputs:

    Relation rel : Input
The ambition is to make this section contain things for those that want to have a look inside, and the section on "How to use it" should be enough for all normal questions. So, read this before looking at the internals of the ecpg. If you are not interested in how it really works, skip this section.

ToDo List

This version of the preprocessor has some flaws:

Library functions: to_date et al. do not exist. But then, Postgres has some good conversion routines itself, so you probably won't miss these.

Structures and unions: Structures and unions have to be defined in the declare section.

Missing statements: The following statements are not implemented thus far:

    exec sql allocate
    exec sql deallocate

Chapter 19. ecpg - Embedded SQL in C

SQLSTATE

message "no data found": The error message for "no data" in an exec sql insert select from statement has to be 100.

sqlwarn[6]: sqlwarn[6] should be 'W' if the PRECISION or SCALE value specified in a SET DESCRIPTOR statement will be ignored.

The Preprocessor

The first four lines written to the output are constant additions by ecpg. These are two comments and two include lines necessary for the interface to the library. Then the preprocessor works in one pass only, reading the input file and writing to the output as it goes along. Normally it just echoes everything to the output without looking at it further. When it comes to an EXEC SQL statement, it intervenes and changes them
Chapter 6. Extending SQL: Operators

The mergejoinable equality operator must have a commutator (itself if the two data types are the same, or a related equality operator if they are different). There must be "<" and ">" ordering operators having the same left and right input datatypes as the mergejoinable operator itself. These operators must be named "<" and ">"; you do not have any choice in the matter, since there is no provision for specifying them explicitly. Note that if the left and right data types are different, neither of these operators is the same as either SORT operator. But they had better order the data values compatibly with the SORT operators, or mergejoin will fail to work.

Chapter 7. Extending SQL: Aggregates

Aggregates in Postgres are expressed in terms of state transition functions. That is, an aggregate can be defined in terms of state that is modified whenever an instance is processed. Some state functions look at a particular value in the instance when computing the new state (sfunc1 in the create aggregate syntax), while others only keep track of their own internal state (sfunc2).

If we define an aggregate that uses only sfunc1, we define an aggregate that computes a running function of the attribute values from each instance. "Sum" is an example of this kind of aggregate. "Sum" starts at zero and always adds the current instance's value to its running total. We will use the int4pl that is
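A running-sum aggregate over int4 that uses only sfunc1 can be sketched as follows (the aggregate name is invented, and the exact clause names should be checked against the CREATE AGGREGATE reference page for this release):

```sql
-- A sum-style aggregate: state starts at 0, and sfunc1 (int4pl,
-- the int4 addition routine) folds each instance's value into it.
CREATE AGGREGATE my_sum (
    sfunc1 = int4pl,
    basetype = int4,
    stype1 = int4,
    initcond1 = '0'
);
```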
Threads and Servlets: read the section later in this document if you are thinking of using them, as it covers some important points.

Using the ResultSet Interface

The following must be considered when using the ResultSet interface:

Before reading any values, you must call next(). This returns true if there is a result but, more importantly, it prepares the row for processing.

Under the JDBC spec, you should access a field only once. It's safest to stick to this rule, although at the current time the Postgres driver will allow you to access a field as many times as you want.

You must close a ResultSet by calling close() once you have finished with it.

Once you request another query with the Statement used to create a ResultSet, the currently open instance is closed.

An example is as follows:

    Statement st = db.createStatement();
    ResultSet rs = st.executeQuery("select * from mytable");
    while (rs.next()) {
        System.out.print("Column 1 returned ");
        System.out.println(rs.getString(1));
    }
    rs.close();
    st.close();

Performing Updates

To perform an update (or any other SQL statement that does not return a result), you simply use the executeUpdate method:

    st.executeUpdate("create table basic (a int2, b int2)");

Closing the Connection

To close the database connection, simply call the close method of the Connection:

    db.close();

Using Large Objects

In Postgres, large objects (also known as blobs) are used to hold data in the database that cannot be s
LANGUAGE 'c';

While there are ways to construct new instances or modify existing instances from within a C function, these are far too complex to discuss in this manual.

Caveats

We now turn to the more difficult task of writing programming language functions. Be warned: this section of the manual will not make you a programmer. You must have a good understanding of C (including the use of pointers and the malloc memory manager) before trying to write C functions for use with Postgres.

While it may be possible to load functions written in languages other than C into Postgres, this is often difficult (when it is possible at all) because other languages, such as FORTRAN and Pascal, often do not follow the same calling convention as C. That is, other languages do not pass argument and return values between functions in the same way. For this reason, we will assume that your programming language functions are written in C.

Chapter 4. Extending SQL: Functions

The basic rules for building C functions are as follows:

Most of the header (include) files for Postgres should already be installed in PGROOT/include (see Figure 2). You should always include

    -I$PGROOT/include

on your cc command lines. Sometimes, you may find that you require header files that are in the server source itself (i.e., you need a file we neglected to install in include). In those cases you may need to add one or more of

    -I$PGROOT/src/backend
    -I$PGROOT/src/backend/include
UPDATE instead of the original tuple.

Note that there is no initialization performed by the CREATE TRIGGER handler. This will be changed in the future. Also, if more than one trigger is defined for the same event on the same relation, the order of trigger firing is unpredictable. This may be changed in the future.

If a trigger function executes SQL queries (using SPI), then these queries may fire triggers again. This is known as cascading triggers. There is no explicit limitation on the number of cascade levels.

If a trigger is fired by INSERT and inserts a new tuple in the same relation, then this trigger will be fired again. Currently, there is nothing provided for synchronization (etc.) of these cases, but this may change. At the moment, there is a function funny_dup17() in the regress tests which uses some techniques to stop recursion (cascading) on itself.

Interaction with the Trigger Manager

As mentioned above, when a function is called by the trigger manager, the structure TriggerData *CurrentTriggerData is NOT NULL and initialized. So it is better to check CurrentTriggerData against being NULL at the start, and set it to NULL just after fetching the information, to prevent calls to a trigger function not from the trigger manager.

struct TriggerData is defined in src/include/commands/trigger.h:

    typedef struct TriggerData
    {
        TriggerEvent tg_event;
        Relation     tg_relation;
        HeapTuple    tg_trigtuple;
        HeapTuple    tg_newtuple;
        Trigger     *tg_trigger;
    } Trigger
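The check-then-clear pattern described above looks roughly like this inside a C trigger function. This is a backend-only fragment, not a standalone program; the function name and elog message are illustrative:

```
#include "executor/spi.h"
#include "commands/trigger.h"

HeapTuple
trigf()
{
    TriggerData *trigdata;

    /* Refuse to run if not fired by the trigger manager. */
    if (CurrentTriggerData == NULL)
        elog(WARN, "trigf: not fired by trigger manager");

    /* Fetch what we need, then clear the global immediately. */
    trigdata = CurrentTriggerData;
    CurrentTriggerData = NULL;

    /* ... examine trigdata->tg_event, tg_relation, tg_trigtuple ... */

    return trigdata->tg_trigtuple;
}
```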
(a), bmag = Mag(b);

        return (amag == bmag);
    }

There are a couple of important things that are happening below.

First, note that operators for less-than, less-than-or-equal, equal, greater-than-or-equal, and greater-than for int4 are being defined. All of these operators are already defined for int4 under the names <, <=, =, >= and >. The new operators behave differently, of course. In order to guarantee that Postgres uses these new operators rather than the old ones, they need to be named differently from the old ones. This is a key point: you can "overload" operators in Postgres, but only if the operator isn't already defined for the argument types. That is, if you have < defined for (int4, int4), you can't define it again. Postgres does not check this when you define your operator, so be careful. To avoid this problem, odd names will be used for the operators. If you get this wrong, the access methods are likely to crash when you try to do scans.

The other important point is that all the operator functions return Boolean values. The access methods rely on this fact. (On the other hand, the support function returns whatever the particular access method expects; in this case, a signed integer.) The final routine in the file is the "support routine" mentioned when we discussed the amsupport attribute of the pg_am class. We will use this later on. For now, ignore it.

    CREATE FUNCTION complex_abs_eq(complex_abs, complex_abs)
        RETURNS bool
packet length, then the packet itself. The difference is historical.

Protocol

This section describes the message flow. There are four different types of flows, depending on the state of the connection: startup, query, function call, and termination. There are also special provisions for notification responses and command cancellation, which can occur at any time after the startup phase.

Chapter 25. Frontend/Backend Protocol

Startup

Startup is divided into an authentication phase and a backend startup phase.

Initially, the frontend sends a StartupPacket. The postmaster uses this info and the contents of the pg_hba.conf(5) file to determine what authentication method the frontend must use. The postmaster then responds with one of the following messages:

ErrorResponse: The postmaster then immediately closes the connection.

AuthenticationOk: The postmaster then hands over to the backend. The postmaster takes no further part in the communication.

AuthenticationKerberosV4: The frontend must then take part in a Kerberos V4 authentication dialog (not described here) with the postmaster. If this is successful, the postmaster responds with an AuthenticationOk; otherwise, it responds with an ErrorResponse.

AuthenticationKerberosV5: The frontend must then take part in a Kerberos V5 authentication dialog (not described here) with the postmaster. If this is successful, the postmaster responds with an AuthenticationOk; otherwise, it resp
avail) AS total_avail
      FROM shoe_ready shoe_ready, shoe_ready *OLD*, shoe_ready *NEW*,
           shoe rsh, shoelace rsl
     WHERE int4ge(min(rsh.sh_avail, rsl.sl_avail), 2)
       AND (bpchareq(rsl.sl_color, rsh.slcolor)
       AND float8ge(rsl.sl_len_cm, rsh.slminlen_cm)
       AND float8le(rsl.sl_len_cm, rsh.slmaxlen_cm));

In reality, the AND clauses in the qualification will be operator nodes of type AND with a left and right expression. But that makes it even less readable than it already is, and there are more rules to apply. So I only put them into some parentheses, to group them into logical units in the order they were added, and we continue with the rule for relation shoe, as it is the next rangetable entry that is referenced and has a rule. The result of applying it is:

    SELECT sh.shoename, sh.sh_avail, rsl.sl_name, rsl.sl_avail,
           min(sh.sh_avail, rsl.sl_avail) AS total_avail
      FROM shoe_ready shoe_ready, shoe_ready *OLD*, shoe_ready *NEW*,
           shoe rsh, shoelace rsl,
           shoe *OLD*, shoe *NEW*, shoe_data sh, unit un
     WHERE (int4ge(min(sh.sh_avail, rsl.sl_avail), 2)
       AND (bpchareq(rsl.sl_color, sh.slcolor)
       AND float8ge(rsl.sl_len_cm, float8mul(sh.slminlen, un.un_fact))
       AND float8le(rsl.sl_len_cm, float8mul(sh.slmaxlen, un.un_fact))))

Chapter 8. The Postgres Rule System

       AND bpchareq(sh.slunit, un.un_name);

And finally we apply the already well-known rule for shoelace (this time on a parsetree that is a little more complex) and get:

    SELECT sh.shoename, sh.sh_avail, s.sl_name, s
example of creating an operator for adding two complex numbers. We assume we've already created the definition of type complex. First we need a function that does the work; then we can define the operator:

    CREATE FUNCTION complex_add(complex, complex)
        RETURNS complex
        AS '$PWD/obj/complex.so'
        LANGUAGE 'c';

    CREATE OPERATOR + (
        leftarg = complex,
        rightarg = complex,
        procedure = complex_add,
        commutator = +
    );

Now we could execute a query like this:

    SELECT (a + b) AS c FROM test_complex;

    +----------------+
    |c               |
    +----------------+
    |(5.2,6.05)      |
    +----------------+
    |(133.42,144.95) |
    +----------------+

We've shown how to create a binary operator here. To create unary operators, just omit one of leftarg (for left unary) or rightarg (for right unary). The procedure clause and the argument clauses are the only required items in CREATE OPERATOR. The COMMUTATOR clause shown in the example is an optional hint to the query optimizer. Further details about COMMUTATOR and other optimizer hints appear below.

Operator Optimization Information

Author: Written by Tom Lane.

A Postgres operator definition can include several optional clauses that tell the system useful things about how the operator behaves. These clauses should be provided whenever appropriate, because they can make for considerable speedups in execution of queries that use the operator. But if you provide them, you must be sure that they are right! Incorrect use of an optim
can read/write in each user's directory. I would hesitate to recommend this, however, since we have no idea what security holes this creates.

Debugging ApplixWare ODBC Connections

One good tool for debugging connection problems uses the Unix system utility strace.

Debugging with strace:

1. Start applixware.

2. Start an strace on the axnet process. For example, if

    ps -aucx | grep ax

shows

    cary 10432 0.0 2.6 1740 392 S Oct 9 0:00 axnet
    cary 27883 0.9 31.0 12692 4596 S 10:24 0:04 axmain

then run

    strace -f -s 1024 -p 10432

3. Check the strace output.

Note from Cary: Many of the error messages from ApplixWare go to stderr, but I'm not sure where stderr is sent, so strace is the way to find out.

Chapter 20. ODBC Interface

For example, after getting a "Cannot launch gateway on server", I ran strace on axnet and got:

    [pid 27947] open("/usr/lib/libodbc.so", O_RDONLY) = -1 ENOENT (No such file or directory)
    [pid 27947] open("/lib/libodbc.so", O_RDONLY) = -1 ENOENT (No such file or directory)
    [pid 27947] write(2, "/usr2/applix/axdata/elfodbc: can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)

So what is happening is that applix elfodbc is searching for libodbc.so, but it can't find it. That is why axnet.cnf needed to be changed.

Running the ApplixWare Demo

In order to go through the ApplixWare Data Tutorial, you need to create the sample tables that the Tutorial refers to. The ELF Macro used to create t
and use the rest of the PostgreSQL.

Why Embedded SQL?

Embedded SQL has some small advantages over other ways to handle SQL queries. It takes care of all the tedious moving of information to and from variables in your C program. Many RDBMS packages support this embedded language.

There is an ANSI standard describing how the embedded language should work. ecpg was designed to meet this standard as much as possible. So it is possible to port programs with embedded SQL written for other RDBMS packages to Postgres, thus promoting the spirit of free software.

The Concept

You write your program in C with some special SQL things. For declaring variables that can be used in SQL statements, you need to put them in a special declare section. You use a special syntax for the SQL queries.

Before compiling, you run the file through the embedded SQL C preprocessor, and it converts the SQL statements you used to function calls with the variables used as arguments. Both variables that are used as input to the SQL statements and variables that will contain the result are passed.

Then you compile, and at link time you link with a special library that contains the functions used. These functions (actually it is mostly one single function) fetch the information from the arguments, perform the SQL query using the ordinary interface (libpq), and put back the result in the arguments dedicated for output.

Then you run your program, and when the control arrives to
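The declare section and host-variable syntax just described look roughly like this (the table, column, and variable names are invented for illustration):

```
EXEC SQL BEGIN DECLARE SECTION;
    int  salary;
    char name[64];
EXEC SQL END DECLARE SECTION;

/* ... after connecting, host variables are referenced with a colon: */
EXEC SQL SELECT sal INTO :salary FROM emp WHERE ename = :name;
```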
In the case of logfunc2, the Postgres main parser does not know what type 'now' should become and therefore it returns a data type of text containing the string 'now'. During the assignment to the local variable curtime, the PL/pgSQL interpreter casts this string to the datetime type by calling the text_out() and datetime_in() functions for the conversion.

This type checking done by the Postgres main parser was implemented after PL/pgSQL was nearly done. It is a difference between 6.3 and 6.4 and affects all functions using the prepared-plan feature of the SPI manager. Using a local variable in the above manner is currently the only way in PL/pgSQL to get those values interpreted correctly.

If record fields are used in expressions or statements, the data types of fields should not change between calls of one and the same expression. Keep this in mind when writing trigger procedures that handle events for more than one table.

Statements

Anything not understood by the PL/pgSQL parser, as specified below, will be put into a query and sent down to the database engine to execute. The resulting query should not return any data.

Assignment

An assignment of a value to a variable or row/record field is written as

    identifier := expression;

If the expression's result data type doesn't match the variable's data type, or the variable has a size/precision that is known (as for char(20)), the result value will be implicitly
char *fe_getauthname(char *errorMessage)

fe_setauthsvc

Specifies that libpq should use authentication service name rather than its compiled-in default. This value is typically taken from a command-line switch.

    void fe_setauthsvc(char *name, char *errorMessage)

Any error messages from the authentication attempts are returned in the errorMessage argument.

Environment Variables

The following environment variables can be used to select default connection parameter values, which will be used by PQconnectdb or PQsetdbLogin if no value is directly specified by the calling code. These are useful to avoid hard-coding database names into simple application programs.

PGHOST sets the default server name. If a non-zero-length string is specified, TCP/IP communication is used. Without a host name, libpq will connect using a local Unix domain socket.

PGPORT sets the default port or local Unix domain socket file extension for communicating with the Postgres backend.

PGDATABASE sets the default Postgres database name.

PGUSER sets the username used to connect to the database and for authentication.

PGPASSWORD sets the password used if the backend demands password authentication.

PGREALM sets the Kerberos realm to use with Postgres, if it is different from the local realm. If PGREALM is set, Postgres applications will attempt authentication with servers for this realm and use separate ticket files to avoid conflicts
the postmaster signals the backend to abort processing of the current query. The cancellation signal may or may not have any effect; for example, if it arrives after the backend has finished processing the query, then it will have no effect. If the cancellation is effective, it results in the current command being terminated early with an error message.

The upshot of all this is that, for reasons of both security and efficiency, the frontend has no direct way to tell whether a cancel request has succeeded. It must continue to wait for the backend to respond to the query. Issuing a cancel simply improves the odds that the current query will finish soon, and improves the odds that it will fail with an error message instead of succeeding.

Since the cancel request is sent to the postmaster and not across the regular frontend/backend communication link, it is possible for the cancel request to be issued by any process, not just the frontend whose query is to be canceled. This may have some benefits of flexibility in building multiple-process applications. It also introduces a security risk, in that unauthorized persons might try to cancel queries. The security risk is addressed by requiring a dynamically generated secret key to be supplied in cancel requests.

Termination

The normal, graceful termination procedure is that the frontend sends a Terminate message and immediately closes the connection. On receipt of the message, the backend immediately closes the
statement is broken into when it is in the querytree structure. The parts of a querytree are:

the commandtype
    This is a simple value telling which command (SELECT, INSERT, UPDATE, DELETE) produced the parsetree.

the rangetable
    The rangetable is a list of relations that are used in the query. In a SELECT statement these are the relations given after the FROM keyword. Every rangetable entry identifies a table or view and tells by which name it is called in the other parts of the query. In the querytree, the rangetable entries are referenced by index rather than by name, so here it doesn't matter if there are duplicate names, as it would in an SQL statement. This can happen after the rangetables of rules have been merged in. The examples in this document will not have this situation.

the resultrelation
    This is an index into the rangetable that identifies the relation where the results of the query go. SELECT queries normally don't have a result relation. The special case of a SELECT INTO is mostly identical to a CREATE TABLE, INSERT ... SELECT sequence and is not discussed separately here. On INSERT, UPDATE and DELETE queries, the resultrelation is the table or view where the changes take effect.

the targetlist
    The targetlist is a list of expressions that define the result of the query. In the case of a SELECT, the expressions are what builds the final output of the query. They are the expre
status indication by direct access to a constant string array inside libpq:

    extern const char * const pgresStatus[];

However, using the function is recommended instead, since it is more portable and will not fail on out-of-range values.

PQresultErrorMessage returns the error message associated with the query, or an empty string if there was no error.

    const char *PQresultErrorMessage(PGresult *res);

Immediately following a PQexec or PQgetResult call, PQerrorMessage (on the connection) will return the same string as PQresultErrorMessage (on the result). However, a PGresult will retain its error message until destroyed, whereas the connection's error message will change when subsequent operations are done. Use PQresultErrorMessage when you want to know the status associated with a particular PGresult; use PQerrorMessage when you want to know the status from the latest operation on the connection.

PQntuples returns the number of tuples (instances) in the query result.

    int PQntuples(PGresult *res);

PQnfields returns the number of fields (attributes) in each tuple of the query result.

    int PQnfields(PGresult *res);

PQbinaryTuples returns 1 if the PGresult contains binary tuple data, 0 if it contains ASCII data.

    int PQbinaryTuples(PGresult *res);

Currently, binary tuple data can only be returned by a query that extracts data from a BINARY cursor.

PQfname returns the field (attribute) name associated with the given field index. Field indices start at
obligations to provide maintenance, support, updates, enhancements, or modifications.

UNIX is a trademark of X/Open, Ltd. Sun4, SPARC, SunOS and Solaris are trademarks of Sun Microsystems, Inc. DEC, DECstation, Alpha AXP and ULTRIX are trademarks of Digital Equipment Corp. PA-RISC and HP-UX are trademarks of Hewlett-Packard Co. OSF/1 is a trademark of the Open Software Foundation.

Chapter 2. Architecture

Postgres Architectural Concepts

Before we continue, you should understand the basic Postgres system architecture. Understanding how the parts of Postgres interact will make the next chapter somewhat clearer. In database jargon, Postgres uses a simple "process per user" client/server model. A Postgres session consists of the following cooperating UNIX processes (programs):

    a supervisory daemon process (postmaster),
    the user's frontend application (e.g., the psql program), and
    the one or more backend database servers (the postgres process itself).

A single postmaster manages a given collection of databases on a single host. Such a collection of databases is called an installation or site. Frontend applications that wish to access a given database within an installation make calls to the library. The library sends user requests over the network to the postmaster (How a connection is established (a)), which in turn starts a new backend server process (How a connection is established (b)) and connects the frontend process to the new server (How a connec
be composite types (complete table rows). In that case, the corresponding identifier $n will be a rowtype, but it must be aliased using the ALIAS command described below. Only the user attributes of a table row are accessible in the row, no Oid or other system attributes (hence the row could be from a view, and view rows don't have useful system attributes). The fields of the rowtype inherit the table's field sizes or precision for char() etc. data types.

name RECORD

    Records are similar to rowtypes, but they have no predefined structure. They are used in selections and FOR loops to hold one actual database row from a SELECT operation. One and the same record can be used in different selections. Accessing a record, or an attempt to assign a value to a record field when there is no actual row in it, results in a runtime error.

    The NEW and OLD rows in a trigger are given to the procedure as records. This is necessary because in Postgres one and the same trigger procedure can handle trigger events for different tables.

name ALIAS FOR $n

    For better readability of the code it is possible to define an alias for a positional parameter to a function. This aliasing is required for composite types given as arguments to a function. The dot notation $1.salary, as in SQL functions, is not allowed in PL/pgSQL.

RENAME oldname TO newname

    Change the name of a variable, record or row. This is useful if NEW or OLD
built into Postgres to perform this addition:

    CREATE AGGREGATE complex_sum (
        sfunc1 = complex_add,
        basetype = complex,
        stype1 = complex,
        initcond1 = '(0,0)'
    );

    SELECT complex_sum(a) FROM test_complex;

     complex_sum
    -------------
     (34,53.9)

If we define only sfunc2, we are specifying an aggregate that computes a running function that is independent of the attribute values from each instance. "Count" is the most common example of this kind of aggregate. "Count" starts at zero and adds one to its running total for each instance, ignoring the instance value. Here, we use the built-in int4inc routine to do the work for us. This routine increments (adds one to) its argument.

    CREATE AGGREGATE my_count (
        sfunc2 = int4inc,   -- add one
        basetype = int4,
        stype2 = int4,
        initcond2 = '0'
    );

    SELECT my_count(*) as emp_count from EMP;

     emp_count
    -----------
             5

"Average" is an example of an aggregate that requires both a function to compute the running sum and a function to compute the running count. When all of the instances have been processed, the final answer for the aggregate is the running sum divided by the running count. We use the int4pl and int4inc routines we used before, as well as the Postgres integer division routine int4div, to compute the division of the sum by the count.

    CREATE AGGREGATE my_average (
        sfunc1 = int4pl,   -- sum
        basetype = int4,
        styp
vac=> select * from ttest;
x
-
1
(1 row)

vac=> insert into ttest select x * 2 from ttest;
NOTICE:trigf (fired before): there are 1 tuples in ttest
NOTICE:trigf (fired after ): there are 2 tuples in ttest
                             ^^^^^^ remember what we said about visibility.
INSERT 167794 1
vac=> select * from ttest;
x
-
1
2
(2 rows)

vac=> update ttest set x = null where x = 2;
NOTICE:trigf (fired before): there are 2 tuples in ttest
UPDATE 0
vac=> update ttest set x = 4 where x = 2;
NOTICE:trigf (fired before): there are 2 tuples in ttest
NOTICE:trigf (fired after ): there are 2 tuples in ttest
UPDATE 1
vac=> select * from ttest;
x
-
1
4
(2 rows)

vac=> delete from ttest;
NOTICE:trigf (fired before): there are 2 tuples in ttest
NOTICE:trigf (fired after ): there are 1 tuples in ttest
NOTICE:trigf (fired before): there are 1 tuples in ttest
NOTICE:trigf (fired after ): there are 0 tuples in ttest
                             ^^^^^^ remember what we said about visibility.
DELETE 2
vac=> select * from ttest;
x
-
(0 rows)

Chapter 14. Server Programming Interface

The Server Programming Interface (SPI) gives users the ability to run SQL queries inside user-defined C functions. The available Procedural Languages (PL) give an alternate means to access these capabilities. In fact, SPI is just a set of native interface functions to simplify access to the Parser, Planner, O
local name is GD.

Trigger Procedures in PL/Tcl

Trigger procedures are defined in Postgres as functions without arguments and a return type of opaque, and so are they in the PL/Tcl language. The information from the trigger manager is given to the procedure body in the following variables:

$TG_name
    The name of the trigger from the CREATE TRIGGER statement.

$TG_relid
    The object ID of the table that caused the trigger procedure to be invoked.

$TG_relatts
    A Tcl list of the table's field names, prefixed with an empty list element. So looking up an element name in the list with the lsearch Tcl command returns the same positive number, starting from 1, as the fields are numbered in the pg_attribute system catalog.

$TG_when
    The string BEFORE or AFTER, depending on the event of the trigger call.

$TG_level
    The string ROW or STATEMENT, depending on the event of the trigger call.

$TG_op
    The string INSERT, UPDATE or DELETE, depending on the event of the trigger call.

$NEW
    An array containing the values of the new table row on INSERT/UPDATE actions, or empty on DELETE.

$OLD
    An array containing the values of the old table row on UPDATE/DELETE actions, or empty on INSERT.

$GD
    The global status data array, as described above.

$args
    A Tcl list of the arguments to the procedure as given in the CREATE TRIGGER statement. The arguments are also accessible as $1 ... $n in the procedure body. The return
    sgml-local-ecat-files:nil
    End:
    -->

The Postgres distribution includes a parsed DTD definitions file, reference.ced.

You may find that when using emacs/psgml, a comfortable way of working with these separate files of book parts is to insert a proper DOCTYPE declaration while you're editing them. If you are working on this source, for instance, it's an appendix chapter, so you would specify the document as an "appendix" instance of a DocBook document by making the first line look like this:

    <!doctype appendix PUBLIC "-//Davenport//DTD DocBook V3.0//EN">

This means that anything and everything that reads SGML will get it right, and I can verify the document with "nsgmls -s docguide.sgml".

Building Documentation

GNU make is used to build documentation from the DocBook sources. There are a few environment definitions which may need to be set or modified for your installation. The Makefile looks for doc/../src/Makefile and (implicitly) for doc/../src/Makefile.custom to obtain environment information. On my system, the src/Makefile.custom looks like

    # Makefile.custom
    # Thomas Lockhart 1998-03-01
    POSTGRESDIR= /opt/postgres/current
    CFLAGS+= -m486
    YFLAGS+= -v
    # documentation
    HSTYLE= /home/tgl/SGML/db107.d/docbook/html
    PSTYLE= /home/tgl/SGML/db107.d/docbook/print

where HSTYLE and PSTYLE determine the path to docbook.dsl for HTML and hardcopy (print) stylesheets, respectively. These stylesheet file names are fo
applications will attempt authentication with servers for this realm and use separate ticket files to avoid conflicts with local ticket files. This environment variable is only used if Kerberos authentication is selected by the backend.

PGOPTIONS sets additional runtime options for the Postgres backend.

PGTTY sets the file or tty on which debugging messages from the backend server are displayed.

The following environment variables can be used to specify user-level default behavior for every Postgres session:

PGDATESTYLE sets the default style of date/time representation.

PGTZ sets the default time zone.

The following environment variables can be used to specify default internal behavior for every Postgres session:

PGGEQO sets the default mode for the genetic optimizer.

PGRPLANS sets the default mode to allow or disable right-sided plans in the optimizer.

PGCOSTHEAP sets the default cost for heap searches for the optimizer.

PGCOSTINDEX sets the default cost for indexed searches for the optimizer.

PGQUERY_LIMIT sets the maximum number of rows returned by a query.

Refer to the SET SQL command for information on correct values for these environment variables.

libpq++ Classes

Connection Class: PgConnection

The connection class makes the actual connection to the database and is inherited by all of the access classes.

Database Class: PgDatabase

The database class provides C++ objects that have a connection to a backend server. To create
causes considerable confusion among users. As a result, we only support large objects as data stored within the Postgres database in PostgreSQL. Even though it is slower to access, it provides stricter data integrity. For historical reasons, this storage scheme is referred to as Inversion large objects. (We will use Inversion and large objects interchangeably to mean the same thing in this section.)

Inversion Large Objects

The Inversion large object implementation breaks large objects up into "chunks" and stores the chunks in tuples in the database. A B-tree index guarantees fast searches for the correct chunk number when doing random access reads and writes.

Large Object Interfaces

The facilities Postgres provides to access large objects, both in the backend as part of user-defined functions or the frontend as part of an application using the interface, are described below. For users familiar with Postgres 4.2, PostgreSQL has a new set of functions providing a more coherent interface. The interface is the same for dynamically loaded C functions as well as for XXX LOST TEXT WHAT SHOULD GO HERE.

The Postgres large object interface is modeled after the UNIX file system interface, with analogues of open(2), read(2), write(2), lseek(2), etc. User functions call these routines to retrieve only the data of interest from a large object. For example, if a large object type called mugshot existed that stored photographs of faces, then a function called b
    sl1       |       5|black     |    80|cm      |       80
    sl2       |       6|black     |   100|cm      |      100
    sl7       |       6|brown     |    60|cm      |       60
    sl4       |       8|black     |    40|inch    |    101.6
    sl3       |      10|black     |    35|inch    |     88.9
    sl8       |      21|brown     |    40|inch    |    101.6
    sl5       |       4|brown     |     1|m       |      100
    sl6       |      20|brown     |   0.9|m       |       90
    (8 rows)

    al_bundy=> SELECT * FROM shoelace_log;
    sl_name   |sl_avail|log_who|log_when
    ----------+--------+-------+--------------------------------
    sl7       |       6|Al     |Tue Oct 20 19:14:45 1998 MET DST
    sl3       |      10|Al     |Tue Oct 20 19:25:16 1998 MET DST
    sl6       |      20|Al     |Tue Oct 20 19:25:16 1998 MET DST
    sl8       |      21|Al     |Tue Oct 20 19:25:16 1998 MET DST
    (4 rows)

It's a long way from the one INSERT ... SELECT to these results, and its description will be the last in this document (but not the last example). First there was the parser's output

    INSERT INTO shoelace_ok SELECT
        shoelace_arrive.arr_name, shoelace_arrive.arr_quant
    FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok;

Now the first rule 'shoelace_ok_ins' is applied and turns it into

    UPDATE shoelace SET
        sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant)
    FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
         shoelace_ok *OLD*, shoelace_ok *NEW*, shoelace shoelace
    WHERE bpchareq(shoelace.sl_name, shoelace_arrive.arr_name);

and throws away the original INSERT on shoelace_ok. This rewritten query is passed to the rule system again, and the second applied rule 'shoelace_upd' produced

    UPDATE shoelace_data SET
        sl_name = shoelace.sl_name,
        sl_avail = int4pl(shoelace.sl_avail, shoe
cluded below.

SGML Authoring Tools

The current Postgres documentation set was written using a plain text editor (or emacs/psgml; see below) with the content marked up using SGML DocBook tags.

SGML and DocBook do not suffer from an oversupply of open-source authoring tools. The most common toolset is the emacs/xemacs editing package with the psgml feature extension. On some systems (e.g., RedHat Linux) these tools are provided in a typical full installation.

emacs/psgml

emacs (and xemacs) have an SGML major mode. When properly configured, this will allow you to use emacs to insert tags and check markup consistency. Put the following in your emacs environment file (for SGML mode, psgml):

    (setq sgml-catalog-files "/usr/lib/sgml/CATALOG")
    (setq sgml-local-catalogs "/usr/lib/sgml/CATALOG")
    (autoload 'sgml-mode "psgml" "Major mode to edit SGML files." t)

and add an entry in the same file for SGML into the existing definition for auto-mode-alist:

    (setq auto-mode-alist
      '(("\\.sgml$" . sgml-mode)))

Each SGML source file has the following block at the end of the file:

    <!-- Keep this comment at the end of the file
    Local variables:
    mode:sgml
    sgml-omittag:t
    sgml-shorttag:t
    sgml-minimize-attributes:nil
    sgml-always-quote-attributes:t
    sgml-indent-step:1
    sgml-indent-data:t
    sgml-parent-document:nil
    sgml-default-dtd-file:"./reference.ced"
    sgml-exposed-tags:nil
    sgml-local-catalogs:("/usr/lib/sgml/catalog")
    sgml-local-e
connection and terminates. An ungraceful termination may occur due to software failure (i.e., core dump) at either end. If either frontend or backend sees an unexpected closure of the connection, it should clean up and terminate. The frontend has the option of launching a new backend by recontacting the postmaster, if it doesn't want to terminate itself.

Message Data Types

This section describes the base data types used in messages.

Intn(i)
    An n-bit integer in network byte order. If i is specified it is the literal value. Eg. Int16, Int32(42).

LimStringn(s)
    A character array of exactly n bytes interpreted as a '\0'-terminated string. The '\0' is omitted if there is insufficient room. If s is specified it is the literal value. Eg. LimString32, LimString64("user").

String(s)
    A conventional C '\0'-terminated string with no length limitation. A frontend should always read the full string even though it may have to discard characters if its buffers aren't big enough.

    Note: Is 8193 bytes the largest allowed size?

    If s is specified it is the literal value. Eg. String, String("user").

Byten(c)
    Exactly n bytes. If c is specified it is the literal value. Eg. Byte, Byte1('\n').

Message Formats

This section describes the detailed format of each message. Each can be sent by either a frontend (F), a postmaster/backend (B), or both (F & B).

AsciiRow (B)

    Byte1('D')
        Identi
ction is closed after sending this message.

NoticeResponse
    A warning message has been issued. The frontend should display the message, but continue listening for ReadyForQuery or ErrorResponse.

The ReadyForQuery message is the same one that the backend will issue after each query cycle. Depending on the coding needs of the frontend, it is reasonable to consider ReadyForQuery as starting a query cycle (and then BackendKeyData indicates successful conclusion of the startup phase), or to consider ReadyForQuery as ending the startup phase and each subsequent query cycle.

Query

A Query cycle is initiated by the frontend sending a Query message to the backend. The backend then sends one or more response messages depending on the contents of the query command string, and finally a ReadyForQuery response message. ReadyForQuery informs the frontend that it may safely send a new query or function call.

The possible response messages from the backend are:

CompletedResponse
    An SQL command completed normally.

CopyInResponse
    The backend is ready to copy data from the frontend to a relation. The frontend should then send a CopyDataRows message. The backend will then respond with a CompletedResponse message with a tag of "COPY".

CopyOutResponse
    The backend is ready to copy data from a relation to the frontend. It then sends a CopyDataRows message, and then a CompletedResponse message with a tag of "COPY".

CursorResponse
    The query was either an insert(l
You can update in just a couple of minutes, typically, even over a modem-speed line.

5. You can save yourself some typing by making a file .cvsrc in your home directory that contains

       cvs -z3
       update -d -P

   This supplies the -z3 option to all cvs commands, and the -d and -P options to cvs update. Then you just have to say

       $ cvs update

   to update your files.

Caution: Some older versions of CVS have a bug that causes all checked-out files to be stored world-writable in your directory. If you see that this has happened, you can do something like

    $ chmod -R go-w pgsql

to set the permissions properly. This bug is fixed as of CVS version 1.9.28.

CVS can do a lot of other things, such as fetching prior revisions of the Postgres sources rather than the latest development version. For more info consult the manual that comes with CVS, or see the online documentation at http://www.cyclic.com/.

Getting The Source Via CVSup

An alternative to using anonymous CVS for retrieving the Postgres source tree is CVSup. CVSup was developed by John Polstra (mailto:jdp@polstra.com) to distribute CVS repositories and other file trees for the FreeBSD project (http://www.freebsd.org).

A major advantage to using CVSup is that it can reliably replicate the entire CVS repository on your local system, allowing fast local access to cvs operations such as log and diff. Other advantages include fast synchronization t
d by the user in the manner to be described below.

Chapter 3. Extending SQL: An Overview

About the Postgres System Catalogs

Having introduced the basic extensibility concepts, we can now take a look at how the catalogs are actually laid out. You can skip this section for now, but some later sections will be incomprehensible without the information given here, so mark this page for later reference. All system catalogs have names that begin with pg_. The following classes contain information that may be useful to the end user. (There are many other system catalogs, but there should rarely be a reason to query them directly.)

Table 3-1. Postgres System Catalogs

Figure 3-1. The major Postgres system catalogs

[Figure 3-1 is a diagram relating pg_index, pg_attribute (attnum, atttypid), the type entries (typinput, typoutput, typreceive, typsend), pg_proc (prolang), and the access-method entries (amopid, amopclaid, ambeginscan, amrescan, amendscan, ammarkpos, amrestrpos, ambuild), with attributes marked as primary key, foreign key ("REFERS TO"), and non-key. A circle indicates key values that are alternate primary keys, i.e., the class is generally identified by oid but may be identified by the non-oid primary key in other contexts.]

The Reference Manual gives a more detailed explanation of these catalogs and their attributes. However, the major Postgres system catalo
d doesn't process SI cache for a long period. When a backend detects the SI table full at 70%, it simply sends a signal to the postmaster, which will wake up all idle backends and make them flush the cache.

The typical use of signals by programmers could be the following:

    # stop postgres
    kill -TERM $postmaster_pid

    # kill all the backends
    kill -QUIT $postmaster_pid

    # kill only the postmaster
    kill -INT $postmaster_pid

    # change pg_options
    cat new_pg_options > $DATA_DIR/pg_options
    kill -HUP $postmaster_pid

    # change pg_options only for a backend
    cat new_pg_options > $DATA_DIR/pg_options
    kill -HUP $backend_pid
    cat old_pg_options > $DATA_DIR/pg_options

Chapter 27. gcc Default Optimizations

Note: Contributed by Brian Gallew (mailto:geek+@cmu.edu)

Configuring gcc to use certain flags by default is a simple matter of editing the /usr/local/lib/gcc-lib/platform/version/specs file. The format of this file is pretty simple. The file is broken into sections, each of which is three lines long. The first line is "*section_name:" (e.g., "*asm:"). The second line is a list of flags, and the third line is blank.

The easiest change to make is to append the desired default flags to the list in the appropriate section. As an example, let's suppose that I have linux running on a '486 with gcc 2.7.2 installed in the default location. In the file /usr/local/lib/gcc-lib/i486-linux/2.7.2/specs, 13 lines down I
would issue the command

    UPDATE shoelace_data SET sl_avail = 0
    WHERE sl_color = 'black';

four rows in fact get updated (sl1, sl2, sl3 and sl4). But sl3 already has sl_avail = 0. This time, the original parsetree's qualification is different, and that results in the extra parsetree

    INSERT INTO shoelace_log SELECT
        shoelace_data.sl_name, 0,
        getpgusername(), 'now'
    FROM shoelace_data
    WHERE 0 != shoelace_data.sl_avail
      AND shoelace_data.sl_color = 'black';

This parsetree will surely insert three new log entries. And that's absolutely correct.

It is important that the original parsetree is executed last. The Postgres "traffic cop" does a command counter increment between the execution of the two parsetrees, so the second one can see changes made by the first. If the UPDATE were executed first, all the rows would already be set to zero, so the logging INSERT would not find any row where 0 != shoelace_data.sl_avail.

Cooperation with Views

A simple way to protect view relations from the mentioned possibility that someone can INSERT, UPDATE and DELETE invisible data on them is to let those parsetrees get thrown away. We create the rules

    CREATE RULE shoe_ins_protect AS ON INSERT TO shoe
        DO INSTEAD NOTHING;
    CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe
        DO INSTEAD NOTHING;
    CREATE RULE shoe_del_protect AS ON DELETE TO shoe
        DO INSTEAD NOTHING;

If Al now tries to do any of these operations on
Oid lobjId, int start, int len)
{
    int lobj_fd;
    char *buf;
    int nbytes;
    int nwritten;
    int i;

    lobj_fd = lo_open(conn, lobjId, INV_READ);
    if (lobj_fd < 0) {
        fprintf(stderr, "can't open large object %d\n", lobjId);
    }

    lo_lseek(conn, lobj_fd, start, SEEK_SET);
    buf = malloc(len + 1);

    for (i = 0; i < len; i++)
        buf[i] = 'X';
    buf[i] = '\0';

    nwritten = 0;
    while (len - nwritten > 0) {
        nbytes = lo_write(conn, lobj_fd, buf + nwritten, len - nwritten);
        nwritten += nbytes;
    }
    fprintf(stderr, "\n");

    lo_close(conn, lobj_fd);
}

/*
 * exportFile -- export large object "lobjOid" to file "out_filename"
 */
void exportFile(PGconn *conn, Oid lobjId, char *filename)
{
    int lobj_fd;
    char buf[BUFSIZE];
    int nbytes, tmp;
    int fd;

    /* open the inversion "object" for reading */
    lobj_fd = lo_open(conn, lobjId, INV_READ);
    if (lobj_fd < 0) {
        fprintf(stderr, "can't open large object %d\n", lobjId);
    }

    /* open the Unix file to be written to */
    fd = open(filename, O_CREAT | O_WRONLY, 0666);
    if (fd < 0) {   /* error */
        fprintf(stderr, "can't open unix file %s\n", filename);
    }

    /* read from the inversion file and write to the Unix file */
    while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0) {
        tmp = write(fd, buf, nbytes);
        if (tmp < nbytes) {
            fprintf(stderr, "error while writing %s\n", filename);
        }
    }

    (void) lo_close(conn, lobj_fd);
    (void) close(fd);

    return;
}

void exit_nicely(PGconn *conn)
{
    P
to just SP, the SGML parser kit that Jade is built upon. We suggest that you don't do that, though, since there is more that you need to change than what is in Makefile.jade, so you'd have to edit one of them anyway.

Go through the Makefile, reading James' instructions and editing as needed. There are various variables that need to be set. Here is a collected summary of the most important ones, with typical values:

    prefix = /usr/local
    XDEFINES = -DSGML_CATALOG_FILES_DEFAULT=\"/usr/local/share/sgml/catalog\"
    XLIBS = -lm
    RANLIB = ranlib
    srcdir = ..
    XLIBDIRS = grove spgrove style
    XPROGDIRS = jade

Note the specification of where to find the default catalog of SGML support files; you may want to change that to something more suitable for your own installation. If your system doesn't need the above settings for the math library and the ranlib command, leave them as they are in the Makefile.

Type make to build Jade and the various SP tools. Once the software is built, make install will do the obvious.

Installing the DocBook DTD Kit

Installing the DocBook DTD Kit:

1. You'll want to place the files that make up the DocBook DTD kit in the directory you built Jade to expect them in, which, if you followed our suggestion above, is /usr/local/share/sgml. In addition to the actual DocBook files, you'll need to have a catalog file in place, for the mapping of document type specifications and external entity references to actual files in
the backend until it reaches the memory limit is to create tables and then set up the view rules by hand with CREATE RULE in such a way that one selects from the other that selects from the one. This could never happen if CREATE VIEW is used, because on the first CREATE VIEW the second relation does not exist, and thus the first view cannot select from the second.

View Rules in Non-SELECT Statements

Two details of the parsetree aren't touched in the description of view rules above. These are the commandtype and the resultrelation. In fact, view rules don't need this information.

There are only a few differences between a parsetree for a SELECT and one for any other command. Obviously, they have another commandtype, and this time the resultrelation points to the rangetable entry where the result should go. Anything else is absolutely the same. So having two tables t1 and t2 with attributes a and b, the parsetrees for the two statements

    SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a;

    UPDATE t1 SET b = t2.b WHERE t1.a = t2.a;

are nearly identical. The rangetables contain entries for the tables t1 and t2. The targetlists contain one variable that points to attribute b of the rangetable entry for table t2. The qualification expressions compare the attributes a of both ranges for equality. The consequence is that both parsetrees result in similar execution plans: they are both joins over the two tables. For the UP
dStr

-getTuple tupleNumber : returns the fields of the indicated tuple in a list. Tuple numbers start at zero.
-tupleArray tupleNumber arrayName : stores the fields of the tuple in array arrayName, indexed by field names. Tuple numbers start at zero.
-attributes : returns a list of the names of the tuple attributes.
-lAttributes : returns a list of sublists, {name ftype fsize}, for each tuple attribute.
-clear : clear the result query object.

Outputs

The result depends on the selected option, as described above.

Description

pg_result returns information about a query result created by a prior pg_exec. You can keep a query result around for as long as you need it, but when you are done with it, be sure to free it by executing pg_result -clear. Otherwise, you have a memory leak, and Pgtcl will eventually start complaining that you've created too many query result objects.

pg_select

Name

pg_select -- loop over the result of a SELECT statement

Synopsis

    pg_select dbHandle queryString arrayVar queryProcedure

Inputs

dbHandle : Specifies a valid database handle.
queryString : Specifies a valid SQL select query.
arrayVar : Array variable for tuples returned.
queryProcedure : Procedure run on each tuple found.

Outputs

resultHandle : the return result is either an error message or a handle for a query result.

Description

pg_select submits a SELECT query to the Postgres backend a
ded to process the documentation. One is installation from RPMs on Linux; the other is a general installation from original distributions of the individual tools. Both will be described below. We understand that there are some other packaged distributions for these tools; FreeBSD seems to have them available. Please report package status to the docs mailing list and we will include that information here.

Appendix DG2. Documentation

RPM installation on Linux

Install RPMs (ftp://ftp.cygnus.com/pub/home/rosalia/) for Jade and related packages.

Manual installation of tools

This is a brief run-through of the process of obtaining and installing the software you'll need to edit DocBook source with Emacs and process it with Norman Walsh's DSSSL style sheets to create HTML and RTF. These instructions do not cover the new jade and DocBook support in the sgml-tools (http://www.sgmltools.org/) package. The authors have not tried this package since it adopted DocBook, but it is almost certainly a good candidate for use.

Prerequisites

What you need:

- A working installation of GCC 2.7.2
- A working installation of Emacs 19.19 or later
- An unzip program for UNIX to unpack things

What you must fetch:

- James Clark's Jade (ftp://ftp.jclark.com/pub/jade/) (version 1.1, in file jade1_1.zip, was current at the time of writing)
- DocBook version 3.0 (http://www.ora.com/davenport/docbook/current/docbk30.zip)
- Norman Walsh's Modular Stylesheets (http://nwalsh.com/
def struct {
        int4 length;
        char data[1];
    } text;

Obviously, the data field is not long enough to hold all possible strings; it's impossible to declare such a structure in C. When manipulating variable-length types, we must be careful to allocate the correct amount of memory and initialize the length field. For example, if we wanted to store 40 bytes in a text structure, we might use a code fragment like this:

    #include "postgres.h"
    ...
    char buffer[40]; /* our source data */
    ...
    text *destination = (text *) palloc(VARHDRSZ + 40);
    destination->length = VARHDRSZ + 40;
    memmove(destination->data, buffer, 40);
    ...

Now that we've gone over all of the possible structures for base types, we can show some examples of real functions. Suppose funcs.c looks like:

    #include <string.h>
    #include "postgres.h"

    /* By Value */

    int
    add_one(int arg)
    {
        return arg + 1;
    }

    /* By Reference, Fixed Length */

    Point *
    makepoint(Point *pointx, Point *pointy)
    {
        Point *new_point = (Point *) palloc(sizeof(Point));

        new_point->x = pointx->x;
        new_point->y = pointy->y;

        return new_point;
    }

    /* By Reference, Variable Length */

    text *
    copytext(text *t)
    {
        /* VARSIZE is the total size of the struct in bytes. */
        text *new_t = (text *) palloc(VARSIZE(t));
        memset(new_t, 0, VARSIZE(t));
        VARSIZE(new_t) = VARSIZE(t);
        /* VARDATA is a pointer to the data region of the struct. */
        memcpy((void *)
depending on what it is. The EXEC SQL statement can be one of these:

Declare sections

Declare sections begin with

    exec sql begin declare section;

and end with

    exec sql end declare section;

In the section only variable declarations are allowed. Every variable declared within this section is also entered into a list of variables indexed by name, together with the corresponding type. In particular, the definition of a structure or union also has to be listed inside a declare section; otherwise ecpg cannot handle these types, since it simply does not know the definition. The declaration is echoed to the file, to make the variable a normal C variable as well. The special types VARCHAR and VARCHAR2 are converted into a named struct for every variable. A declaration like

    VARCHAR var[180];

is converted into

    struct varchar_var { int len; char arr[180]; } var;

Include statements

An include statement looks like

    exec sql include filename;

Note that this is NOT the same as

    #include <filename.h>

Instead, the file specified is parsed by ecpg itself. So the contents of the specified file are included in the resulting C code. This way you are able to specify EXEC SQL commands in an include file.

Connect statement

A connect statement looks like

    exec sql connect to connection_target;

It creates a connection to the specified database. The connection target can be specified in the followi
detail of Postgres enters the stage. At this moment, table rows aren't overwritten, and this is why ABORT TRANSACTION is fast. In an UPDATE, the new result row is inserted into the table (after stripping ctid), and in the tuple header of the row that ctid pointed to, the cmax and xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden, and after the transaction has committed, the vacuum cleaner can really move it out.

Knowing all that, we can simply apply view rules in absolutely the same way to any command. There is no difference.

The Power of Views in Postgres

The above demonstrates how the rule system incorporates view definitions into the original parsetree. In the second example, a simple SELECT from one view created a final parsetree that is a join of 4 tables (unit was used twice with different names).

Benefits

The benefit of implementing views with the rule system is that the optimizer has all the information about which tables have to be scanned, plus the relationships between these tables, plus the restrictive qualifications from the views, plus the qualifications from the original query, in one single parsetree. And this is still the situation when the original query is already a join over views. Now the optimizer has to decide which is the best path to execute the query. The more information the optimizer has, the better this decision can be. And the rule s
e Notifies to see if any notification data is currently available from the backend.

PgDatabase::Notifies returns the next notification from a list of unhandled notifications from the backend. The function returns NULL if there are no pending notifications from the backend. PgDatabase::Notifies behaves like the popping of a stack: once a notification is returned from PgDatabase::Notifies, it is considered handled and will be removed from the list of notifications.

PgDatabase::Notifies retrieves pending notifications from the server.

    PGnotify* PgDatabase::Notifies()

The second sample program gives an example of the use of asynchronous notification.

Functions Associated with the COPY Command

The copy command in Postgres has options to read from or write to the network connection used by libpq. Therefore, functions are necessary to access this network connection directly, so applications may take full advantage of this capability.

PgDatabase::GetLine reads a newline-terminated line of characters (transmitted by the backend server) into a buffer string of size length.

    int PgDatabase::GetLine(char* string, int length)

Like the Unix system routine fgets(3), this routine copies up to length-1 characters into string. It is like gets(3), however, in that it converts the terminating newline into a null character.

PgDatabase::GetLine returns EOF at end of file, 0 if the entire line has been read, and 1 if the buffer
e database has no built-in knowledge of how to interpret the function's source text. Instead, the calls are passed into a handler that knows the details of the language. The handler itself is a special programming language function, compiled into a shared object and loaded on demand.

Installing Procedural Languages

Procedural Language Installation

A procedural language is installed in the database in three steps:

1. The shared object for the language handler must be compiled and installed. By default, the handler for PL/pgSQL is built and installed into the database library directory. If Tcl/Tk support is configured in, the handler for PL/Tcl is also built and installed in the same location. Writing a handler for a new procedural language (PL) is outside the scope of this manual.

2. The handler must be declared with the command

       CREATE FUNCTION handler_function_name ()
           RETURNS OPAQUE
           AS 'path-to-shared-object'
           LANGUAGE 'C';

   The special return type of OPAQUE tells the database that this function does not return one of the defined base or composite types and is not directly usable in SQL statements.

3. The PL must be declared with the command

       CREATE [ TRUSTED ] PROCEDURAL LANGUAGE 'language-name'
           HANDLER handler_function_name
           LANCOMPILER 'description';

   The optional keyword TRUSTED tells whether ordinary database users that have no superuser privileges can use this language to create functions and trigger procedures. Since PL functions are executed
e a qualified rule that rewrites a query to NOTHING if the value of a column does not appear in another table. But then the data is silently thrown away, and that's not a good idea. If checks for valid values are required, and in the case of an invalid value an error message should be generated, it must be done by a trigger for now.

On the other hand, a trigger that is fired on INSERT on a view can do the same as a rule: put the data somewhere else and suppress the insert in the view. But it cannot do the same thing on UPDATE or DELETE, because there is no real data in the view relation that could be scanned, and thus the trigger would never get called. Only a rule will help.

For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired once for any affected row. A rule manipulates the parsetree or generates an additional one. So if many rows are affected in one statement, a rule issuing one extra query would usually do a better job than a trigger that is called for every single row and must execute its operations that many times.

For example: there are two tables

    CREATE TABLE computer (
        hostname     text,  -- indexed
        manufacturer text   -- indexed
    );

    CREATE TABLE software (
        software text,  -- indexed
        hostname text   -- indexed
    );

Both tables have many thousands of rows, and the index on hostname is unique. The hostname column contains the full qualified domain name of the computer. The r
e inside.

Large Objects

The types discussed to this point are all "small" objects; that is, they are smaller than 8KB in size.

Note: 1024 longwords == 8192 bytes. In fact, the type must be considerably smaller than 8192 bytes, since the Postgres tuple and page overhead must also fit into this 8KB limitation. The actual value that fits depends on the machine architecture.

If you require a larger type for something like a document retrieval system or for storing bitmaps, you will need to use the Postgres large object interface.

Chapter 6. Extending SQL: Operators

Postgres supports left unary, right unary, and binary operators. Operators can be overloaded; that is, the same operator name can be used for different operators that have different numbers and types of arguments. If there is an ambiguous situation and the system cannot determine the correct operator to use, it will return an error. You may have to typecast the left and/or right operands to help it understand which operator you meant to use.

Every operator is "syntactic sugar" for a call to an underlying function that does the real work, so you must first create the underlying function before you can create the operator. However, an operator is not merely syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. Much of this chapter will be devoted to explaining that additional information.

Here is an ex
e usual CREATE FUNCTION command, as a function with no arguments and a return type of OPAQUE.

There are some Postgres-specific details in functions used as trigger procedures. First, they have some special variables created automatically in the top-level block's declaration section. They are:

NEW : Datatype RECORD; variable holding the new database row on INSERT/UPDATE operations on ROW level triggers.

OLD : Datatype RECORD; variable holding the old database row on UPDATE/DELETE operations on ROW level triggers.

TG_NAME : Datatype name; variable that contains the name of the trigger actually fired.

TG_WHEN : Datatype text; a string of either 'BEFORE' or 'AFTER' depending on the trigger's definition.

TG_LEVEL : Datatype text; a string of either 'ROW' or 'STATEMENT' depending on the trigger's definition.

TG_OP : Datatype text; a string of 'INSERT', 'UPDATE' or 'DELETE' telling for which operation the trigger is actually fired.

TG_RELID : Datatype oid; the object ID of the table that caused the trigger invocation.

TG_RELNAME : Datatype name; the name of the table that caused the trigger invocation.

TG_NARGS : Datatype integer; the number of arguments given to the trigger procedure in the CREATE TRIGGER statement.

TG_ARGV[] : Datatype array of text; the arguments from the CREATE TRIGGER statement. The index counts from 0 and can be given as an expression. Invalid indices (< 0 or >
eard could be declared on mugshot data. Beard could look at the lower third of a photograph and determine the color of the beard that appeared there, if any. The entire large object value need not be buffered, or even examined, by the beard function. Large objects may be accessed from dynamically-loaded C functions or database client programs that link the library. Postgres provides a set of routines that support opening, reading, writing, closing, and seeking on large objects.

Creating a Large Object

The routine

    Oid lo_creat(PGconn *conn, int mode)

creates a new large object. mode is a bitmask describing several different attributes of the new object. The symbolic constants listed here are defined in $PGROOT/src/backend/libpq/libpq-fs.h. The access type (read, write, or both) is controlled by OR'ing together the bits INV_READ and INV_WRITE. If the large object should be archived (that is, if historical versions of it should be moved periodically to a special archive relation), then the INV_ARCHIVE bit should be set. The low-order sixteen bits of mask are the storage manager number on which the large object should reside. For sites other than Berkeley, these bits should always be zero. The commands below create an (Inversion) large object:

    inv_oid = lo_creat(INV_READ|INV_WRITE|INV_ARCHIVE);

Importing a Large Object

To import a UNIX file as a large object, call

    Oid lo_import(PGconn *conn, text *filename)
need the required permissions for the tables/views he names in his queries. For example: a user has a list of phone numbers where some of them are private, the others are of interest for the secretary of the office. He can construct the following:

    CREATE TABLE phone_data (person text, phone text, private bool);

    CREATE VIEW phone_number AS
        SELECT person, phone FROM phone_data WHERE NOT private;

    GRANT SELECT ON phone_number TO secretary;

Nobody except him (and the database superusers) can access the phone_data table. But due to the GRANT, the secretary can SELECT from the phone_number view. The rule system will rewrite the SELECT from phone_number into a SELECT from phone_data, and add the qualification that only entries where private is false are wanted. Since the user is the owner of phone_number, the read access to phone_data is now checked against his permissions, and the query is considered granted. The check for accessing phone_number is still performed, so nobody other than the secretary can use it.

The permissions are checked rule by rule. So the secretary is for now the only one who can see the public phone numbers. But the secretary can set up another view and grant access to that to the public. Then anyone can see the phone_number data through the secretary's view. What the secretary cannot do is create a view that directly accesses phone_data (actually he can, but it will not work, since every access aborts the transaction during the permission chec
int4,
        sfunc2 = int4inc,    -- count
        stype2 = int4,
        finalfunc = int4div, -- division
        initcond1 = '0',
        initcond2 = '0'
    );

    SELECT my_average(salary) as emp_average FROM EMP;

     emp_average
    -------------
            1640

Chapter 8. The Postgres Rule System

Production rule systems are conceptually simple, but there are many subtle points involved in actually using them. Some of these points and the theoretical foundations of the Postgres rule system can be found in Stonebraker et al, ACM, 1990.

Some other database systems define active database rules. These are usually stored procedures and triggers, and are implemented in Postgres as functions and triggers. The query rewrite rule system (the rule system from now on) is totally different from stored procedures and triggers. It modifies queries to take rules into consideration, and then passes the modified query to the query optimizer for execution. It is very powerful, and can be used for many things such as query language procedures, views, and versions. The power of this rule system is discussed in Ong and Goh, 1990, as well as Stonebraker et al, ACM, 1990.

What is a Querytree?

To understand how the rule system works, it is necessary to know when it is invoked and what its input and results are. The rule system is located between the query parser and the optimizer. It takes the output o
relation, function, etc.) referenced by the prepared plan is dropped during your session (by your backend or another process), then the results of SPI_execp for this plan will be unpredictable.

Chapter 14. Server Programming Interface

SPI_execp

Name

SPI_execp -- Executes a passed plan

Synopsis

    SPI_execp(plan, values, nulls, tcount)

Inputs

void *plan : Execution plan
Datum *values : Actual parameter values
char *nulls : Array describing which parameters get NULLs; 'n' indicates NULL allowed, ' ' indicates NULL not allowed
int tcount : Number of tuples for which plan is to be executed

Outputs

int : Returns the same value as SPI_exec, as well as
    SPI_ERROR_ARGUMENT if plan is NULL or tcount < 0
    SPI_ERROR_PARAM if values is NULL and plan was prepared with some parameters

SPI_tuptable : initialized as in SPI_exec, if successful
SPI_processed : initialized as in SPI_exec, if successful

Description

SPI_execp stores a plan prepared by SPI_prepare in safe memory, protected from freeing by SPI_finish or the transaction manager. In the current version of Postgres there is no ability to store prepared plans in the system catalog and fetch them from there for execution. This will be implemented in future versions. As a workaround, there is the ability to reuse prepared plans in subsequent invocations of your procedure in the current session. Use SPI_execp to execute
ents can be specified, separated by OR. The relation name determines which table the event applies to. The FOR EACH statement determines whether the trigger is fired for each affected row, or before (or after) the entire statement has completed. The procedure name is the C function called. The args are passed to the function in the CurrentTriggerData structure. The purpose of passing arguments to the function is to allow different triggers with similar requirements to call the same function.

Also, a function may be used for triggering different relations; these functions are named "general trigger functions".

As an example of using both features above, there could be a general function that takes as its arguments two field names, and puts the current user in one and the current timestamp in the other. This allows triggers to be written on INSERT events to automatically track creation of records in a transaction table, for example. It could also be used as a "last updated" function if used in an UPDATE event.

Trigger functions return a HeapTuple to the calling executor. This is ignored for triggers fired after an INSERT, DELETE or UPDATE operation, but it allows BEFORE triggers to:

- return NULL to skip the operation for the current tuple (and so the tuple will not be inserted/updated/deleted);
- return a pointer to another tuple (INSERT and UPDATE only), which
er, but not including the size word if the field is variable length. It is then the programmer's responsibility to cast and convert the data to the correct C type. The pointer returned by PQgetvalue points to storage that is part of the PGresult structure. One should not modify it, and one must explicitly copy the value into other storage if it is to be used past the lifetime of the PGresult structure itself.

PQgetlength returns the length of a field (attribute) in bytes. Tuple and field indices start at 0.

    int PQgetlength(PGresult *res, int tup_num, int field_num);

This is the actual data length for the particular data value, that is, the size of the object pointed to by PQgetvalue. Note that for ASCII-represented values, this size has little to do with the binary size reported by PQfsize.

PQgetisnull tests a field for a NULL entry. Tuple and field indices start at 0.

    int PQgetisnull(PGresult *res, int tup_num, int field_num);

This function returns 1 if the field contains a NULL, 0 if it contains a non-null value. (Note that PQgetvalue will return an empty string, not a null pointer, for a NULL field.)

PQcmdStatus returns the command status string from the SQL command that generated the PGresult.

    char *PQcmdStatus(PGresult *res);

PQcmdTuples returns the number of rows affected by the SQL command.

    const char *PQcmdTuples(PGresult *res);

If the SQL command that generated the PGresult was INSERT, UPDATE or DELETE, this returns a strin
er will probably be different, and you should substitute the value you see for the value below. We can add the new instance as follows:

    INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)
        SELECT a.oid, b.oid, c.oid, 1
        FROM pg_am a, pg_opclass b, pg_proc c
        WHERE a.amname = 'btree' AND
              b.opcname = 'complex_abs_ops' AND
              c.proname = 'complex_abs_cmp';

Now we need to add a hashing strategy to allow the type to be indexed. We do this by using another type in pg_am, but we reuse the same ops.

    INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy, amopselect, amopnpages)
        SELECT am.oid, opcl.oid, c.opoid, 1,
               'hashsel'::regproc, 'hashnpage'::regproc
        FROM pg_am am, pg_opclass opcl, complex_abs_ops_tmp c
        WHERE amname = 'hash' AND
              opcname = 'complex_abs_ops' AND
              c.oprname = '=';

In order to use this index in a where clause, we need to modify the pg_operator class as follows:

    UPDATE pg_operator
        SET oprrest = 'eqsel'::regproc, oprjoin = 'eqjoinsel'
        WHERE oprname = '=' AND
              oprleft = oprright AND
              oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator
        SET oprrest = 'neqsel'::regproc, oprjoin = 'neqjoinsel'
        WHERE oprname = '<>' AND
              oprleft = oprright AND
              oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator
        SET oprrest = 'neqsel'::regproc, oprjoin = 'neqjoinsel'
        WHERE oprname = '<>' AND
              oprleft = oprrigh
ers must be defined in backend/utils/misc/trace.c and backend/include/utils/trace.h. For example, suppose we want to add conditional trace messages and a tunable numeric parameter to the code in file foo.c. All we need to do is to add the constants TRACE_FOO and OPT_FOO_PARAM into backend/include/utils/trace.h:

    /* file trace.h */
    enum pg_option_enum {
        ...
        TRACE_FOO,        /* trace foo functions */
        OPT_FOO_PARAM,    /* foo tunable parameter */
        ...
        NUM_PG_OPTIONS    /* must be the last item of enum */
    };

and a corresponding line in backend/utils/misc/trace.c:

    /* file trace.c */
    static char *opt_names[] = {
        ...
        "foo",            /* trace foo functions */
        "fooparam",       /* foo tunable parameter */
        ...
    };

Options in the two files must be specified in exactly the same order. In the foo source files we can now reference the new flags with:

    /* file foo.c */
    #include "trace.h"
    #define foo_param pg_options[OPT_FOO_PARAM]

    int foo_function(int x, int y)
    {
        ...
        TPRINTF(TRACE_FOO, "entering foo_function, foo_param=%d", foo_param);
        ...
        if (foo_param > 10) {
            do_more_foo(x, y);
        }
        ...
    }

Existing files using private trace flags can be changed by simply adding the following code:

    #include "trace.h"

    /* int my_own_flag = 0; -- removed */
    #define my_own_flag pg_options[OPT_MY_OWN_FLAG]

All pg_options are initialized to zero at backend startup. If we need a different default value, we must add some initialization code at the beginning of PostgresMain. Now we can set the foo
es

    /* fetch instances from the pg_database, the system catalog of databases */
    res = PQexec(conn, "DECLARE mycursor BINARY CURSOR FOR select * from test1");
    if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "DECLARE CURSOR command failed\n");
        PQclear(res);
        exit_nicely(conn);
    }
    PQclear(res);

    res = PQexec(conn, "FETCH ALL in mycursor");
    if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "FETCH ALL command didn't return tuples properly\n");
        PQclear(res);
        exit_nicely(conn);
    }

    i_fnum = PQfnumber(res, "i");
    d_fnum = PQfnumber(res, "d");
    p_fnum = PQfnumber(res, "p");

    for (i = 0; i < 3; i++)
    {
        printf("type[%d] = %d, size[%d] = %d\n",
               i, PQftype(res, i),
               i, PQfsize(res, i));
    }

    for (i = 0; i < PQntuples(res); i++)
    {
        int      ival;
        float   *dval;
        int      plen;
        POLYGON *pval;

        /* we hard-wire this to the 3 fields we know about */
        ival = *(int *) PQgetvalue(res, i, i_fnum);
        dval = (float *) PQgetvalue(res, i, d_fnum);
        plen = PQgetlength(res, i, p_fnum);

        /* plen doesn't include the length field so need to increment by VARHDRSZ */
        pval = (POLYGON *) malloc(plen + VARHDRSZ);
        pval->size = plen;
        memmove((char *) &pval->npts, PQgetvalue(res, i, p_fnum), plen);

        printf("tuple %d: got\n", i);
        printf(" i = (%d bytes) %d,\n",
               PQgetlength(res, i, i_fnum), ival);
        printf(" d = (%d bytes) %f,\n",
               PQgetlength(res, i, d_fnum), *dval);
        printf(" p = (%d bytes) %d points,\tboundbox =
es the qualification given by the tree attached to qpqual, it is handed back; otherwise the next tuple is fetched until the qualification is satisfied. If the last tuple of the relation has been processed, a NULL pointer is returned. After a tuple has been handed back by the lefttree of the MergeJoin, the righttree is processed in the same way. If both tuples are present, the executor processes the MergeJoin node. Whenever a new tuple from one of the subplans is needed, a recursive call to the executor is performed to obtain it. If a joined tuple could be created, it is handed back, and one complete processing of the plan tree has finished.

Now the described steps are performed once for every tuple, until a NULL pointer is returned for the processing of the MergeJoin node, indicating that we are finished.

Chapter 23. pg_options

Note: Contributed by Massimo Dal Zotto (mailto:dz@cs.unitn.it)

The optional file data/pg_options contains runtime options used by the backend to control trace messages and other backend tunable parameters. What makes this file interesting is the fact that it is re-read by a backend when it receives a SIGHUP signal, making it thus possible to change run-time options on the fly without needing to restart Postgres. The options specified in this file may be debugging flags used by the trace package (backend/utils/misc/trace.c) or numeric parameters which can be used by the backend to control its behaviour. New options and paramet
ess connected to exactly one server process. As we don't know per se how many connections will be made, we have to use a master process that spawns a new server process every time a connection is requested. This master process is called postmaster, and listens at a specified TCP/IP port for incoming connections. Whenever a request for a connection is detected, the postmaster process spawns a new server process called postgres. The server tasks (postgres processes) communicate with each other using semaphores and shared memory, to ensure data integrity throughout concurrent data access. Figure \ref{connection} illustrates the interaction of the master process postmaster, the server process postgres, and a client application.

The client process can either be the psql frontend (for interactive SQL queries) or any user application implemented using the libpq library. Note that applications implemented using ecpg (the Postgres embedded SQL preprocessor for C) also use this library. Once a connection is established, the client process can send a query to the backend (server). The query is transmitted using plain text, i.e. there is no parsing done in the frontend (client). The server parses the query, creates an execution plan, executes the plan, and returns the retrieved tuples to the client by transmitting them over the established connection.

The Parser Stage

The parser stage consists of two parts: The parser, defined in gram.y and scan.l, is built using the U
eturns tuples to the caller via elog(NOTICE).

    if (ret == SPI_OK_SELECT && SPI_processed > 0)
    {
        TupleDesc tupdesc = SPI_tuptable->tupdesc;
        SPITupleTable *tuptable = SPI_tuptable;
        char buf[8192];
        int i;

        for (ret = 0; ret < proc; ret++)
        {
            HeapTuple tuple = tuptable->vals[ret];

            for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++)
                sprintf(buf + strlen(buf), " %s",
                        SPI_getvalue(tuple, tupdesc, i));
            elog(NOTICE, "EXECQ: %s", buf);
        }
    }

    SPI_finish();

    return proc;
}

Now compile and create the function:

    create function execq(text, int4) returns int4 as '...path_to_so' language 'c';

    vac=> select execq('create table a (x int4)', 0);
    execq
    -----
        0
    (1 row)

    vac=> insert into a values (execq('insert into a values (0)', 0));
    INSERT 167631 1

    vac=> select execq('select * from a', 0);
    NOTICE: EXECQ:  0    <<< inserted by execq
    NOTICE: EXECQ:  1    <<< value returned by execq and inserted by upper INSERT
    execq
    -----
        2
    (1 row)

    vac=> select execq('insert into a select x + 2 from a', 1);

    vac=> select execq('select * from a', 10);
    NOTICE: EXECQ:  0
    NOTICE: EXECQ:  1
    NOTICE: EXECQ:  2    <<< 0 + 2, only one tuple inserted - as specified
    execq
    -----
        3    <<< 10 is max value only, 3 is real # of tuples
    (1 row)

    vac=> delete from a;
    DELETE 3
    vac=> insert into a values (execq('select * from a', 0) + 1);
    INSERT 167712 1
    vac=> select * from a;
executed. SPI_OK_UPDATE if UPDATE was executed.

89
Chapter 14. Server Programming Interface

Description

SPI_exec creates an execution plan (parser + planner + optimizer) and executes the query for tcount tuples.

Usage

This should only be called from a connected procedure. If tcount is zero then it executes the query for all tuples returned by the query scan. Using tcount > 0 you may restrict the number of tuples for which the query will be executed. For example,

    SPI_exec ("insert into table select * from table", 5);

will allow at most 5 tuples to be inserted into table. If execution of your query was successful then a non-negative value will be returned.

    Note: You may pass many queries in one string, or the query string may be
    re-written by RULEs. SPI_exec returns the result for the last query executed.

The actual number of tuples for which the (last) query was executed is returned in the global variable SPI_processed (if not SPI_OK_UTILITY). If SPI_OK_SELECT is returned and SPI_processed > 0 then you may use the global pointer SPITupleTable *SPI_tuptable to access the selected tuples. Also NOTE that SPI_finish frees and makes all SPITupleTables unusable. (See Memory management.)

SPI_exec may return one of the following (negative) values:

    SPI_ERROR_ARGUMENT if query is NULL or tcount < 0
    SPI_ERROR_UNCONNECTED if procedure is unconnected
    SPI_ERROR_COPY if COPY TO/FROM stdin
    SPI_ERROR_CURSOR if DECLARE/CLOSE CURSOR, FETCH
    SP
170. f the parser one querytree and the rewrite rules from the pg_rewrite catalog which are querytrees too with some extra information and creates zero or many querytrees as result So it s input and output are always things the parser itself could have produced and thus anything it sees is basically representable as an SQL statement Now what is a querytree It is an internal representation of an SQL statement where the single parts that built it are stored separately These querytrees are visible when starting the Postgres backend with debuglevel 4 and typing queries into the interactive backend interface The rule actions in the pg_rewrite system catalog are also stored as querytrees They are not formatted like the debug output but they contain exactly the same information Reading a querytree requires some experience and it was a hard time when I started to work on the rule system I can remember that I was standing at the coffee machine and I saw the cup in a targetlist water and coffee powder in a rangetable and all the buttons in a qualification expression Since SQL representations of querytrees are sufficient to understand the rule system this document will not teach how to read them It might help to learn it and the naming conventions are required in the later following descriptions The Parts of a Querytree When reading the SQL representations of the querytrees in this document it is necessary to be able to identify the parts the st
171. f libpq applications in src libpq examples including the source code for the three examples in this chapter Control and Initialization Environment Variables The following environment variables can be used to set up default values for an environment and to avoid hard coding database names into an application program Note Refer to the ibpq for a complete list of available connection options The following environment variables can be used to select default connection parameter values which will be used by PQconnectdb or PQsetdbLogin if no value is directly specified by the calling code These are useful to avoid hard coding database names into simple application programs Note libpq uses only environment variables or PQconnectdb conninfo style strings PGHOST sets the default server name If a non zero length string is specified TCP IP communication is used Without a host name libpq will connect using a local Unix domain socket PGPORT sets the default port or local Unix domain socket file extension for communicating with the Postgres backend PGDATABASE sets the default Postgres database name PGUSER sets the username used to connect to the database and for authentication PGPASSWORD sets the password used if the backend demands password authentication 138 Chapter 17 libpg C Binding PGREALM sets the Kerberos realm to use with Postgres if it is different from the local realm If PGREALM is set Postgres appli
172. f processing one SQL command not the whole string ReadyForQuery will always be sent whether processing terminates successfully or with an error NoticeResponse A warning message has been issued in relation to the query Notices are in addition to other responses ie the backend will continue processing the command A frontend must be prepared to accept ErrorResponse and NoticeResponse messages whenever it is expecting any other type of message Actually it is possible for NoticeResponse to arrive even when the frontend is not expecting any kind of message that is the backend is nominally idle In particular the backend can be commanded to terminate by its postmaster In that case it will send a NoticeResponse before closing the connection It is recommended that the frontend check for such asynchronous notices just before issuing any new command Also if the frontend issues any listen 1 commands then it must be prepared to accept NotificationResponse messages at any time see below Function Call A Function Call cycle is initiated by the frontend sending a FunctionCall message to the backend The backend then sends one or more response messages depending on the results of the function call and finally a ReadyForQuery response message ReadyForQuery informs the frontend that it may safely send a new query or function call The possible response messages from the backend are ErrorResponse An error has occurred 206 C
fication from the list of unhandled notification messages received from the backend. Returns NULL if there are no pending notifications. Once a notification is returned from PQnotifies, it is considered handled and will be removed from the list of notifications.

    PGnotify *PQnotifies(PGconn *conn);

126
Chapter 16. libpq

    typedef struct pgNotify {
        char relname[NAMEDATALEN];  /* name of relation containing data */
        int  be_pid;                /* process id of backend */
    } PGnotify;

After processing a PGnotify object returned by PQnotifies, be sure to free it with free() to avoid a memory leak.

    NOTE: in Postgres 6.4 and later, the be_pid is the notifying backend's,
    whereas in earlier versions it was always your own backend's PID.

The second sample program gives an example of the use of asynchronous notification.

PQnotifies() does not actually read backend data; it just returns messages previously absorbed by another libpq function. In prior releases of libpq, the only way to ensure timely receipt of NOTIFY messages was to constantly submit queries, even empty ones, and then check PQnotifies() after each PQexec(). While this still works, it is deprecated as a waste of processing power. A better way to check for NOTIFY messages when you have no useful queries to make is to call PQconsumeInput(), then check PQnotifies(). You can use select(2) to wait for backend data to arrive, thereby using no CPU power unless there is something to do. Note that this will wor
fies the message as an ASCII data row. (A prior RowDescription message defines the number of fields in the row and their data types.)

Byten
    A bit map with one bit for each field in the row. The 1st field corresponds to bit 7 (MSB) of the 1st byte, the 2nd field corresponds to bit 6 of the 1st byte, the 8th field corresponds to bit 0 (LSB) of the 1st byte, the 9th field corresponds to bit 7 of the 2nd byte, and so on. Each bit is set if the value of the corresponding field is not NULL. If the number of fields is not a multiple of 8, the remainder of the last byte in the bit map is wasted.

Then, for each field with a non-NULL value, there is the following:

Int32
    Specifies the size of the value of the field, including this size.

Byten
    Specifies the value of the field itself in ASCII characters. n is the above size minus 4. There is no trailing \0 in the field data; the front end must add one if it wants one.

AuthenticationOk (B)

Byte1('R')
    Identifies the message as an authentication request.

Int32(0)
    Specifies that the authentication was successful.

209
Chapter 25. Frontend/Backend Protocol

AuthenticationKerberosV4 (B)

Byte1('R')
    Identifies the message as an authentication request.

Int32(1)
    Specifies that Kerberos V4 authentication is required.

AuthenticationKerberosV5 (B)

Byte1('R')
    Identifies the message as an authentication request.

Int32(2)
    Specifies that Kerberos V5 authentication is required.
175. find the following section Pho AA SECTION gt xel Zu SECTION As you can see there aren t any default flags If I always wanted compiles of C code to use m486 fomit frame pointer I would change it to look like SECTION eci m486 fomit frame pointer SECTION If I wanted to be able to generate 386 code for another older linux box lying around I d have to make it look like this SECTION ECLI 1m386 m486 fomit frame pointer SECTION This will always omit frame pointers any will build 486 optimized code unless m386 is specified on the command line You can actually do quite a lot of customization with the specs file Always remember however that these changes are global and affect all users of the system 219 Chapter 28 Backend Interface Backend Interface BKI files are scripts that are input to the Postgres backend running in the special bootstrap mode that allows it to perform database functions without a database system already existing BKI files can therefore be used to create the database system in the first place initdb uses BKI files to do just that to create a database system However initdb s BKI files are generated internally It generates them using the files globall bki source and local1 template1 bki source which it finds in the Postgres library directory They get insta
176. fnumber Returns the field attribute index associated with the given field name int PgDatabase FieldNum const char field name is returned if the given name does not match any field FieldType Returns the field type associated with the given field index The integer returned is an internal coding of the type Field indices start at 0 Oid PgDatabase FieldType int field num 141 Chapter 17 libpg C Binding FieldType Returns the field type associated with the given field name The integer returned is an internal coding of the type Field indices start at 0 Oid PgDatabase FieldType const char field name FieldSize Returns the size in bytes of the field associated with the given field index Field indices start at O short PgDatabase FieldSize int field num Returns the space allocated for this field in a database tuple given the field number In other words the size of the server s binary representation of the data type 1 is returned if the field is variable size FieldSize Returns the size in bytes of the field associated with the given field index Field indices start at O short PgDatabase FieldSize const char field name Returns the space allocated for this field in a database tuple given the field name In other words the size of the server s binary representation of the data type 1 is returned if the field is variable size GetValue Returns a single field attribute value of one tuple of a PGresult
177. for more detailed information 1 You must modify axnet cnf so that elfodbc can find libodbc so the ODBC driver manager shared library This library is included with the ApplixWare distribution but axnet cnf needs to be modified to point to the correct location As root edit the file applixroot applix axdata axnet cnf a At the bottom of axnet cnf find the line that starts with HlibFor elfodbc ax b Change line to read libFor elfodbc applixroot applix axdata axshlib lib which will tell elfodbc to look in this directory for the ODBC support library If you have installed applix somewhere else change the path accordingly Create odbc ini as described above You may also want to add the flag TextAsLongVarchar 0 to the database specific portion of odbc ini so that text fields will not be shown as BLOB Testing ApplixWare ODBC Connections 1 Ze Bring up Applix Data Select the Postgres database of interest a Select Query gt Choose Server b Select ODBC and click Browse The database you configured in odbc ini should be shown Make sure that the Host field is empty if it is not axnet will try to contact axnet on another machine to look for the database c Select the database in the box that was launched by Browse then click OK d Enter username and password in the login identification dialog and click OK You should see Starting elfodbc server in the lower left corner of the data window If you get a
from a prior PQsendQuery and return it. NULL is returned when the query is complete and there will be no more results.

    PGresult *PQgetResult(PGconn *conn);

PQgetResult must be called repeatedly until it returns NULL, indicating that the query is done. (If called when no query is active, PQgetResult will just return NULL at once.) Each non-null result from PQgetResult should be processed using the same PGresult accessor functions previously described. Don't forget to free each result object with PQclear when done with it. Note that PQgetResult will block only if a query is active and the necessary response data has not yet been read by PQconsumeInput.

Using PQsendQuery and PQgetResult solves one of PQexec's problems: if a query string contains multiple SQL commands, the results of those commands can be obtained individually. (This allows a simple form of overlapped processing, by the way: the frontend can be handling the results of one query while the backend is still working on later queries in the same query string.) However, calling PQgetResult will still cause the frontend to block until the backend completes the next SQL command. This can be avoided by proper use of three more functions:

PQconsumeInput
    If input is available from the backend, consume it.

        int PQconsumeInput(PGconn *conn);

PQconsumeInput normally returns 1 indicating "no error", but returns 0 if there was some kind of trouble (in which case PQerrorMessage is set). Note that t
179. frontend connected to backend server And multiple connections can be established d frontend connected to multiple backend servers N SERVER Chapter 2 Architecture Chapter 3 Extending SQL An Overview In the sections that follow we will discuss how you can extend the Postgres SQL query language by adding functions types operators aggregates How Extensibility Works Postgres is extensible because its operation is catalog driven If you are familiar with standard relational systems you know that they store information about databases tables columns etc in what are commonly known as system catalogs Some systems call this the data dictionary The catalogs appear to the user as classes like any other but the DBMS stores its internal bookkeeping in them One key difference between Postgres and standard relational systems is that Postgres stores much more information in its catalogs not only information about tables and columns but also information about its types functions access methods and so on These classes can be modified by the user and since Postgres bases its internal operation on these classes this means that Postgres can be extended by users By comparison conventional database systems can only be extended by changing hardcoded procedures within the DBMS or by loading modules specially written by the DBMS vendor Postgres is also unlike most other data managers in that the server can inc
180. g documentation into a coherent documentation set the older versions will become obsolete and will be removed from the distribution However this will not happen immediately and will not happen to all documents at the same time To ease the transition and to help guide developers and writers we have defined a transition roadmap Here is the documentation plan for v6 5 234 Appendix DG2 Documentation Start compiling index information for the User s and Administrator s Guides Write more sections for the User s Guide covering areas outside the reference pages This would include introductory information and suggestions for approaches to typical design problems Merge information in the existing man pages into the reference pages and User s Guide Condense the man pages down to reminder information with references into the primary doc set Convert the new sgml reference pages to new man pages replacing the existing man pages Convert all source graphics to CGM format files for portability Currently we mostly have Applix Graphics sources from which we can generate gif output One graphic is only available in gif and ps and should be redrawn or removed Document Structure There are currently five separate documents written in DocBook Each document has a container source document which defines the DocBook environment and other document source files These primary source files are located in doc src sgml along with many
181. g containing the number of rows affected If the command was anything else it returns the empty string PQoidStatus Returns a string with the object id of the tuple inserted if the SQL command was an INSERT Otherwise returns an empty string char PQoidStatus PGresult res POprint Prints out all the tuples and optionally the attribute names to the specified output stream 122 Chapter 16 libpq void PQprint FILE fout output stream PGresult res PQprintOpt po struct POprintOpt pabool header print output field headings and row count pqbool align fill align the fields pabool standard old brain dead format pqbool html3 output html tables pqbool expanded expand tables pqbool pager use pager for output if needed char fieldSep field separator char tableOpt insert to HTML table char caption HTML caption char fieldName null terminated array of replacement field names This function is intended to replace PQprintTuples which is now obsolete The psql program uses PQprint to display query results POprintTuples Prints out all the tuples and optionally the attribute names to the specified output stream void PQprintTuples PGresult res FILE fout output stream int printAttName print attribute names or not int terseOutput delimiter bars or not int width width of column variable width if
182. g on the server The postmaster and backend have different roles but may be implemented by the same executable A frontend sends a startup packet to the postmaster This includes the names of the user and the database the user wants to connect to The postmaster then uses this and the information in the pg_hba conf 5 file to determine what further authentication information it requires the frontend to send if any and responds to the frontend accordingly The frontend then sends any required authentication information Once the postmaster validates this it responds to the frontend that it is authenticated and hands over the connection to a backend The backend then sends a message indicating successful startup normal case or failure for example an invalid database name Subsequent communications are query and result packets exchanged between the frontend and the backend The postmaster takes no further part in ordinary query result communication However the postmaster is involved when the frontend wishes to cancel a query currently being executed by its backend Further details about that appear below When the frontend wishes to disconnect it sends an appropriate packet and closes the connection without waiting for a response for the backend Packets are sent as a data stream The first byte determines what should be expected in the rest of the packet The exception is packets sent from a frontend to the postmaster which comprise a p
183. g the int ops collection as an object with an OID of 421 print out the class and then close it create pg opclass opcname name open pg opclass insert oid 421 int ops print close pg opclass 222 Chapter 29 Page Files A description of the database file default page format This section provides an overview of the page format used by Postgres classes User defined access methods need not use this page format In the following explanation a byte is assumed to contain 8 bits In addition the term item refers to data which is stored in Postgres classes Page Structure The following table shows how pages in both normal Postgres classes and Postgres index classes e g a B tree index are structured Table 29 1 Sample Page Layout itemPointerData filler itemData Unallocated Space ItemContinuationData Special Space ItemData 2 ItemData 1 ItemIdData PageHeaderData The first 8 bytes of each page consists of a page header PageHeaderData Within the header the first three 2 byte integer fields lower upper and special represent byte offsets to the start of unallocated space to the end of unallocated space and to the start of special space Special space is a region at the end of the page which is allocated at page initialization time and which contains information specific to an access method The last 2 bytes of the page header opaque encode the page size and information on the internal
ge as a response to an empty query string.

String("")
    Unused.

212
Chapter 25. Frontend/Backend Protocol

EncryptedPasswordPacket (F)

Int32
    The size of the packet in bytes.

String
    The encrypted (using crypt()) password.

ErrorResponse (B)

Byte1('E')
    Identifies the message as an error.

String
    The error message itself.

FunctionCall (F)

Byte1('F')
    Identifies the message as a function call.

String("")
    Unused.

Int32
    Specifies the object ID of the function to call.

Int32
    Specifies the number of arguments being supplied to the function.

Then, for each argument, there is the following:

Int32
    Specifies the size of the value of the argument, excluding this size.

Byten
    Specifies the value of the field itself in binary format. n is the above size.

FunctionResultResponse (B)

Byte1('V')
    Identifies the message as a function call result.

213
Chapter 25. Frontend/Backend Protocol

Byte1('G')
    Specifies that a nonempty result was returned.

Int32
    Specifies the size of the value of the result, excluding this size.

Byten
    Specifies the value of the result itself in binary format. n is the above size.

Byte1('0')
    Unused. (Strictly speaking, FunctionResultResponse and FunctionVoidResponse are the same thing but with some optional parts to the message.)

FunctionVoidResponse (B)

Byte1('V')
    Identifies the message as a function call result.

Byte1('0')
    Specifies that an em
185. gml jadetex postgres tex jadetex postgres tex dvips postgres dvi Of course when you do this TeX will stop during the second run and tell you that its capacity has been exceeded This is as far as we can tell because of the way JadeTeX generates cross referencing information TeX can of course be compiled with larger data structure sizes The details of this will vary according to your installation 242 Appendix DG2 Documentation Alternate Toolsets sgml tools v2 x now supports jade and DocBook It may be the preferred toolset for working with SGML but we have not had a chance to evaluate the new package 243 Bibliography Selected references and readings for SQL and Postgres SQL Reference Books The Practical SQL Handbook Bowman et al 1993 Using Structured Query Language 3 Judity Bowman Sandra Emerson and Marcy Damovsky 0 201 44787 8 1996 Addison Wesley 1997 A Guide to the SQL Standard Date and Darwen 1997 A user s guide to the standard database language SQL 4 C J Date and Hugh Darwen 0 201 96426 0 1997 Addison Wesley 1997 An Introduction to Database Systems Date 1994 6 C J Date 1 1994 Addison Wesley 1994 Understanding the New SQL Melton and Simon 1993 A complete guide Jim Melton and Alan R Simon 1 55860 245 3 1993 Morgan Kaufmann 1993 Abstract Accessible reference for SQL features Principles of Database and Knowledge Base Systems Ullman 1988 Jeffre
186. gres backend and returns a result Query result handles start with the connection handle and add a period and a result number Note that lack of a Tcl error is not proof that the query succeeded An error message returned by the backend will be processed as a query result with failure status not by generating a Tcl error in pg exec 151 Chapter 18 pgtcl pg_result Name pg_result get information about a query result Synopsis pg_result resultHandle resultOption Inputs resultHandle The handle for a query result resultOption Specifies one of several possible options Options Status the status of the result error the error message if the status indicates error otherwise an empty string conn the connection that produced the result oid if the command was an INSERT the OID of the inserted tuple otherwise an empty string numTuples the number of tuples returned by the query numAttrs the number of attributes in each tuple assign arrayName assign the results to an array using subscripts of the form tupno attributeName assignbyidx arrayName appendstr assign the results to an array using the first attribute s value and the remaining attributes names as keys If appendstr is given then it is appended to each key In short all but the 152 Chapter 18 pgtcl first field of each tuple are stored into the array using subscripts of the form firstFieldV alue fieldNameAppen
187. gs shows the major entities and their relationships in the system catalogs Attributes that do not refer to other entities are not shown unless they are part of a primary key This diagram is more or less incomprehensible until you actually Chapter 3 Extending SOL An Overview start looking at the contents of the catalogs and see how they relate to each other For now the main things to take away from this diagram are as follows In several of the sections that follow we will present various join queries on the system catalogs that display information we need to extend the system Looking at this diagram should make some of these join queries which are often three or four way joins more understandable because you will be able to see that the attributes used in the queries form foreign keys in other classes Many different features classes attributes functions types access methods etc are tightly integrated in this schema A simple create command may modify many of these catalogs Types and procedures are central to the schema Note We use the words procedure and function more or less interchangably Nearly every catalog contains some reference to instances in one or both of these classes For example Postgres frequently uses type signatures e g of functions and operators to identify unique instances of other catalogs There are many attributes and relationships that have obvious meanings but there are many particular
188. h Postgres this takes one of the following forms jdbc postgresql database jdbc postgresql host database jdbc postgresql host port database where host The hostname of the server Defaults to localhost port The port number the server is listening on Defaults to the Postgres standard port number 5432 database The database name To connect you need to get a Connection instance from JDBC To do this you would use the DriverManager getConnection method Connection db DriverManager getConnection url user pwd Issuing a Query and Processing the Result Any time you want to issue SQL statements to the database you require a Statement instance Once you have a Statement you can use the executeQuery method to issue a query This will return a ResultSet instance which contains the entire result Using the Statement Interface The following must be considered when using the Statement interface You can use a Statement instance as many times as you want You could create one as soon as you open the connection and use it for the connections lifetime You have to remember that only one ResultSet can exist per Statement 186 Chapter 21 JDBC Interface If you need to perform a query while processing a ResultSet you can simply create and use another Statement If you are using Threads and several are using the database you must use a separate Statement for each thread Refer to the sections covering
189. hapter 25 Frontend Backend Protocol FunctionResultResponse The function call was executed and returned a result Function VoidResponse The function call was executed and returned no result ReadyForQuery Processing of the function call is complete ReadyForQuery will always be sent whether processing terminates successfully or with an error NoticeResponse A warning message has been issued in relation to the function call Notices are in addition to other responses ie the backend will continue processing the command A frontend must be prepared to accept ErrorResponse and NoticeResponse messages whenever it is expecting any other type of message Also if it issues any listen 1 commands then it must be prepared to accept NotificationResponse messages at any time see below Notification Responses If a frontend issues a listen 1 command then the backend will send a NotificationResponse message not to be confused with NoticeResponse whenever a notify 1 command is executed for the same notification name Notification responses are permitted at any point in the protocol after startup except within another backend message Thus the frontend must be prepared to recognize a NotificationResponse message whenever it is expecting any message Indeed it should be able to handle NotificationResponse messages even when it is not engaged in a query NotificationResponse A notify 1 command has been executed for a name for w
hared object file end in .so. If the file you specify is not a shared object, the backend will hang!

77
Chapter 12. Linking Dynamically-Loaded Functions

SunOS 4.x, Solaris 2.x and HP-UX

Under SunOS 4.x, Solaris 2.x and HP-UX, the simple object file must be created by compiling the source file with special compiler flags and a shared library must be produced. The necessary steps with HP-UX are as follows. The +z flag to the HP-UX C compiler produces so-called "Position Independent Code" (PIC) and the +u flag removes some alignment restrictions that the PA-RISC architecture normally enforces. The object file must be turned into a shared library using the HP-UX link editor with the -b option. This sounds complicated but is actually very simple, since the commands to do it are just:

    # simple HP-UX example
    % cc +z +u -c foo.c
    % ld -b -o foo.sl foo.o

As with the .so files mentioned in the last subsection, the create function command must be told which file is the correct file to load (i.e., you must give it the location of the shared library, or .sl file). Under SunOS 4.x, the commands look like:

    # simple SunOS 4.x example
    % cc -PIC -c foo.c
    % ld -dc -dp -Bdynamic -o foo.so foo.o

and the equivalent lines under Solaris 2.x are:

    # simple Solaris 2.x example
    % cc -K PIC -c foo.c
    % ld -G -Bdynamic -o foo.so foo.o

or

    # simple Solaris 2.x example
    % gcc -fPIC -c foo.c
    % ld -G -Bdynamic -o foo.so foo.o

When link
hash join is that the join operator can only return TRUE for pairs of left and right values that hash to the same hash code. If two values get put in different hash buckets, the join will never compare them at all, implicitly assuming that the result of the join operator must be FALSE. So it never makes sense to specify HASHES for operators that do not represent equality. In fact, logical equality is not good enough either; the operator had better represent pure bitwise equality, because the hash function will be computed on the memory representation of the values regardless of what the bits mean. For example, equality of time intervals is not bitwise equality; the interval equality operator considers two time intervals equal if they have the same duration, whether or not their endpoints are identical. What this means is that a join using "=" between interval fields would yield different results if implemented as a hash join than if implemented another way, because a large fraction of the pairs that should match will hash to different values and will never be compared by the hash join. But if the optimizer chose to use a different kind of join, all the pairs that the equality operator says are equal will be

23
Chapter 6. Extending SQL: Operators

found. We don't want that kind of inconsistency, so we don't mark interval equality as hashable. There are also machine-dependent ways in which a hash join might fail to do the right thing. Fo
...that behaviour is that the parsetree for the INSERT does not reference the shoe relation in any variable. The targetlist contains only constant values. So there is no rule to apply, and it goes down unchanged into execution, and the row is inserted. And so for the DELETE. To change this, we can define rules that modify the behaviour of non-SELECT queries. This is the topic of the next section.

Rules on INSERT, UPDATE and DELETE

Differences to View Rules

Rules that are defined ON INSERT, UPDATE and DELETE are totally different from the view rules described in the previous section. First, their CREATE RULE command allows more:

    They can have no action.
    They can have multiple actions.
    The keyword INSTEAD is optional.
    The pseudo relations NEW and OLD become useful.
    They can have rule qualifications.

Second, they don't modify the parsetree in place. Instead they create zero or many new parsetrees and can throw away the original one.

How These Rules Work

Keep the syntax

    CREATE RULE rule_name AS ON event
        TO object [WHERE rule_qualification]
        DO [INSTEAD] [action | (actions) | NOTHING];

in mind. In the following, "update rules" means rules that are defined ON INSERT, UPDATE or DELETE.

Update rules get applied by the rule system when the result relation and the commandtype of a parsetree are equal to the object and event given in the CREATE RULE command. For update rules, the rule system creates a list of parsetrees. Initially the parsetree list is empty.
...the POLYGON type */

    void
    exit_nicely(PGconn *conn)
    {
        PQfinish(conn);
        exit(1);
    }

    main()
    {
        char       *pghost, *pgport, *pgoptions, *pgtty;
        char       *dbName;
        int         nFields;
        int         i, j;
        int         i_fnum, d_fnum, p_fnum;
        PGconn     *conn;
        PGresult   *res;

        /*
         * begin, by setting the parameters for a backend connection. if the
         * parameters are null, then the system will try to use reasonable
         * defaults by looking up environment variables or, failing that,
         * using hardwired constants
         */
        pghost = NULL;      /* host name of the backend server */
        pgport = NULL;      /* port of the backend server */
        pgoptions = NULL;   /* special options to start up the backend server */
        pgtty = NULL;       /* debugging tty for the backend server */
        dbName = getenv("USER");  /* change this to the name of your test database */

        /* make a connection to the database */
        conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);

        /* check to see that the backend connection was successfully made */
        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
            fprintf(stderr, "%s", PQerrorMessage(conn));
            exit_nicely(conn);
        }

        /* start a transaction block */
        res = PQexec(conn, "BEGIN");
        if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "BEGIN command failed\n");
            PQclear(res);
            exit_nicely(conn);
        }

        /*
         * should PQclear PGresult whenever it is no longer needed to avoid
         * memory leaks
         */
        PQclear(res);
...the other hand, the object file must be postprocessed a bit before it can be loaded into Postgres. We hope that the large increase in speed and reliability will make up for the slight decrease in convenience. You should expect to read (and reread, and re-reread) the manual pages for the C compiler, cc(1), and the link editor, ld(1), if you have specific questions. In addition, the regression test suites in the directory PGROOT/src/regress contain several working examples of this process. If you copy what these tests do, you should not have any problems.

The following terminology will be used below:

    Dynamic loading is what Postgres does to an object file. The object file is copied into the running Postgres server and the functions and variables within the file are made available to the functions within the Postgres process. Postgres does this using the dynamic loading mechanism provided by the operating system.

    Loading and link editing is what you do to an object file in order to produce another kind of object file (e.g., an executable program or a shared library). You perform this using the link editing program, ld(1).

The following general restrictions and notes also apply to the discussion below:

    Paths given to the CREATE FUNCTION command must be absolute paths (i.e., start with "/") that refer to directories visible on the machine on which the Postgres server is running.

        Tip: Relative paths do in fact work, but are relative to the directory where the database resides.
...the result does not say whether any input data was actually collected. After calling PQconsumeInput, the application may check PQisBusy and/or PQnotifies to see if their state has changed. PQconsumeInput may be called even if the application is not prepared to deal with a result or notification just yet. The routine will read available data and save it in a buffer, thereby causing a select(2) read-ready indication to go away. The application can thus use PQconsumeInput to clear the select condition immediately, and then examine the results at leisure.

    PQisBusy
        Returns TRUE if a query is busy, that is, PQgetResult would block waiting for input. A FALSE return indicates that PQgetResult can be called with assurance of not blocking.

            int PQisBusy(PGconn *conn)

        PQisBusy will not itself attempt to read data from the backend; therefore PQconsumeInput must be invoked first, or the busy state will never end.

    PQsocket
        Obtain the file descriptor number for the backend connection socket. A valid descriptor will be >= 0; a result of -1 indicates that no backend connection is currently open.

            int PQsocket(PGconn *conn)

        PQsocket should be used to obtain the backend socket descriptor in preparation for executing select(2). This allows an application to wait for either backend responses or other conditions. If the result of select(2) indicates that data can be read from the backend socket, then PQconsumeInput should be called.
...the tables tries to use a NULL condition on many of the database columns, and Postgres does not currently allow this option. To get around this problem, you can do the following:

Modifying the ApplixWare Demo

    1.  Copy /opt/applix/axdata/eng/Demos/sqldemo.am to a local directory.
    2.  Edit this local copy of sqldemo.am:
        a.  Search for 'null_clause = "NULL"'.
        b.  Change this to 'null_clause = ""'.
    3.  Start Applix Macro Editor.
    4.  Open the sqldemo.am file from the Macro Editor.
    5.  Select File -> Compile and Save.
    6.  Exit Macro Editor.
    7.  Start Applix Data.
    8.  Select * -> Run Macro.
    9.  Enter the value "sqldemo", then click OK.
        You should see the progress in the status line of the data window (in the lower left corner).
    10. You should now be able to access the demo tables.

Chapter 20. ODBC Interface

Useful Macros

You can add information about your database login and password to the standard Applix startup macro file. This is an example ~/axhome/macros/login.am file:

    macro login
        set_system_var@("sql_username@", "tgl")
        set_system_var@("sql_passwd@", "no$way")
    endmacro

    Caution: You should be careful about the file protections on any file containing username and password information.

Supported Platforms

psqlODBC has been built and tested on Linux. There have been reports of success with FreeBSD and with Solaris. There are no known restrictions on the basic code for other platforms which already support Postgres.
...which a previous LISTEN command was executed. Notifications may be sent at any time.

It may be worth pointing out that the names used in LISTEN and NOTIFY commands need not have anything to do with names of relations (tables) in the SQL database. Notification names are simply arbitrarily chosen condition names.

Cancelling Requests in Progress

During the processing of a query, the frontend may request cancellation of the query by sending an appropriate request to the postmaster. The cancel request is not sent directly to the backend for reasons of implementation efficiency: we don't want to have the backend constantly checking for new input from the frontend during query processing. Cancel requests should be relatively infrequent, so we make them slightly cumbersome in order to avoid a penalty in the normal case.

To issue a cancel request, the frontend opens a new connection to the postmaster and sends a CancelRequest message, rather than the StartupPacket message that would ordinarily be sent across a new connection. The postmaster will process this request and then close the connection. For security reasons, no direct reply is made to the cancel request message.

Chapter 25. Frontend/Backend Protocol

A CancelRequest message will be ignored unless it contains the same key data (PID and secret key) passed to the frontend during connection startup. If the request matches the PID and secret key for a currently executing backend, the postmaster signals the backend to abort processing of the current query.
...shrinks to roughly 50MB of space when the sources are removed.

Linux installation

    1.  Install Modula-3.
        a.  Pick up the Modula-3 distribution from Polytechnique Montreal (http://m3.polymtl.ca/m3), who are actively maintaining the code base originally developed by the DEC Systems Research Center (http://www.research.digital.com/SRC/modula-3/html/home.html). The PM3 RPM distribution is roughly 30MB compressed. At the time of writing, the 1.1.10-1 release installed cleanly on RH-5.2, whereas the 1.1.11-1 release is apparently built for another release (RH-6.0?) and does not run on RH-5.2.

                Tip: This particular rpm packaging has many RPM files, so you will likely want to place them into a separate directory.

        b.  Install the Modula-3 rpms:

                # rpm -Uvh pm3*.rpm

Appendix DG1. The CVS Repository

    2.  Unpack the cvsup distribution:

            # cd /usr/local/src
            # tar zxf cvsup-16.0.tar.gz

    3.  Build the cvsup distribution, suppressing the GUI interface feature to avoid requiring X11 libraries:

            # make M3FLAGS="-DNOGUI"

        and if you want to build a static binary to move to systems which may not have Modula-3 installed, try:

            # make M3FLAGS="-DNOGUI -DSTATIC"

    4.  Install the built binary:

            # make M3FLAGS="-DNOGUI -DSTATIC" install

Appendix DG2. Documentation

The purpose of documentation is to make Postgres easier to learn, use and develop. The documentation set should describe the Postgres system, language and interfaces. It should be able to a
...if the buffer offered by the caller is too small to hold a line sent by the backend, then a partial data line will be returned. This can be detected by testing whether the last returned byte is "\n" or not. The returned string is not null-terminated. (If you want to add a terminating null, be sure to pass a bufsize one smaller than the room actually available.)

    PQputline
        Sends a null-terminated string to the backend server. Returns 0 if OK, EOF if unable to send the string.

            int PQputline(PGconn *conn, char *string)

        Note the application must explicitly send the two characters "\." on a final line to indicate to the backend that it has finished sending its data.

    PQputnbytes
        Sends a non-null-terminated string to the backend server. Returns 0 if OK, EOF if unable to send the string.

            int PQputnbytes(PGconn *conn, const char *buffer, int nbytes)

        This is exactly like PQputline, except that the data buffer need not be null-terminated since the number of bytes to send is specified directly.

    PQendcopy
        Syncs with the backend. This function waits until the backend has finished the copy. It should either be issued when the last string has been sent to the backend using PQputline, or when the last string has been received from the backend using PQgetline. It must be issued or the backend may get out of sync with the frontend. Upon return from this function, the backend is ready to receive the next query. The return value is 0 on successful completion, nonzero otherwise.
...modifications: non-NULL if tuple is not NULL and the modify was successful; NULL only if tuple is NULL.

    SPI_result
        SPI_ERROR_ARGUMENT if rel is NULL or tuple is NULL or natts <= 0 or attnum is NULL or Values is NULL.
        SPI_ERROR_NOATTRIBUTE if there is an invalid attribute number in attnum (attnum <= 0 or > number of attributes in tuple).

Chapter 14. Server Programming Interface

Description

SPI_modifytuple modifies a tuple in upper Executor context. See the section on Memory Management.

Usage

If successful, a pointer to the new tuple is returned. The new tuple is allocated in upper Executor context (see Memory Management). Passed tuple is not changed.

SPI_fnumber

Name

SPI_fnumber: Finds the attribute number for specified attribute

Synopsis

    SPI_fnumber(tupdesc, fname)

Inputs

    TupleDesc tupdesc
        Input tuple description
    char * fname
        Field name

Outputs

    int
        Attribute number: valid one-based index number of attribute, or SPI_ERROR_NOATTRIBUTE if the named attribute is not found

Description

SPI_fnumber returns the attribute number for the attribute with name in fname.

Usage

Attribute numbers are 1 based.

SPI_fname

Name

SPI_fname: Finds the attribute name for the specified attribute

Synopsis

    SPI_fname(tupdesc, fnumber)

Inputs

    TupleDesc tupdesc
        Input tuple description
    int fnumber
        Attribute number

Outputs

    char *
        Attribute name
...him to manually update the shoelace view. Instead we setup two little tables, one where he can insert the items from the partlist, and one with a special trick. The create commands for these are:

    CREATE TABLE shoelace_arrive (
        arr_name    char(10),
        arr_quant   integer
    );

    CREATE TABLE shoelace_ok (
        ok_name     char(10),
        ok_quant    integer
    );

    CREATE RULE shoelace_ok_ins AS ON INSERT TO shoelace_ok
        DO INSTEAD
        UPDATE shoelace SET sl_avail = sl_avail + NEW.ok_quant
            WHERE sl_name = NEW.ok_name;

Now Al can sit down and do whatever until

    al_bundy=> SELECT * FROM shoelace_arrive;
    arr_name  |arr_quant
    ----------+---------
    sl3       |       10
    sl6       |       20
    sl8       |       20
    (3 rows)

is exactly that what's on the part list. We take a quick look at the current data,

    al_bundy=> SELECT * FROM shoelace ORDER BY sl_name;
    sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
    ----------+--------+----------+------+--------+---------
    sl1       |       5|black     |    80|cm      |       80
    sl2       |       6|black     |   100|cm      |      100
    sl7       |       6|brown     |    60|cm      |       60
    sl3       |       0|black     |    35|inch    |     88.9
    sl4       |       8|black     |    40|inch    |    101.6
    sl8       |       1|brown     |    40|inch    |    101.6
    sl5       |       4|brown     |     1|m       |      100
    sl6       |       0|brown     |   0.9|m       |       90
    (8 rows)

move the arrived shoelaces in

    al_bundy=> INSERT INTO shoelace_ok SELECT * FROM shoelace_arrive;

and check the results

    al_bundy=> SELECT * FROM shoelace ORDER BY sl_name;
    sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
    ----------+--------+----------+------+--------+---------
    sl1       |       5|bla
...in the example. Should be changed to make sense. - Thomas 1998-08-04

Macro Commands

    DEFINE FUNCTION macro_name AS rettype function_name(args)
        Define a function prototype for a function named macro_name which has its value of type rettype computed from the execution of function_name with the arguments args declared in a C-like manner.

    DEFINE MACRO macro_name FROM FILE filename
        Define a macro named macro_name which has its value read from the file called filename.

Chapter 28. Backend Interface

Debugging Commands

    Note: This section on debugging commands was commented out in the original documentation. - Thomas 1998-08-05

        Randomly print the open class.
    m -1
        Toggle display of time information.
    m 0
        Set retrievals to now.
    m 1 Jan 1 01:00:00 1988
        Set retrievals to snapshots of the specified time.
    m 2 Jan 1 01:00:00 1988, Feb 1 01:00:00 1988
        Set retrievals to ranges of the specified times. Either time may be replaced with space if an unbounded time range is desired.
    &A classname natts name1 type1 name2 type2 ...
        Add natts attributes named name1, name2, etc. of types type1, type2, etc. to the class classname.
    &RR oldclassname newclassname
        Rename the oldclassname class to newclassname.
    &RA classname oldattname newattname
        Rename the oldattname attribute in the class named classname to newattname.

Example

The following set of commands will create the pg_opclass class containing
...ing Postgres v6.5, the notation for flagging commands is not universally consistent throughout the documentation set. Please report problems to the Documentation Mailing List (mailto:docs@postgresql.org).

Y2K Statement

    Author: Written by Thomas Lockhart (mailto:lockhart@alumni.caltech.edu) on 1998-10-22.

The PostgreSQL Global Development Team provides the Postgres software code tree as a public service, without warranty and without liability for its behavior or performance. However, at the time of writing:

    The author of this statement, a volunteer on the Postgres support team since November 1996, is not aware of any problems in the Postgres code base related to time transitions around Jan 1, 2000 (Y2K).

    The author of this statement is not aware of any reports of Y2K problems uncovered in regression testing or in other field use of recent or current versions of Postgres. We might have expected to hear about problems if they existed, given the installed base and the active participation of users on the support mailing lists.

Chapter 1. Introduction

    To the best of the author's knowledge, the assumptions Postgres makes about dates specified with a two-digit year are documented in the current User's Guide (http://www.postgresql.org/docs/user/datatype.htm) in the chapter on data types. For two-digit years, the significant transition year is 1970, not 2000; e.g., "70-01-01" is interpreted as 1970-01-01, whereas "69-01-01" is interpreted as 2069-01-01.
When linking shared libraries, you may have to specify some additional shared libraries (typically system libraries, such as the C and math libraries) on your ld command line.

Chapter 13. Triggers

Postgres has various client interfaces such as Perl, Tcl, Python and C, as well as two Procedural Languages (PL). It is also possible to call C functions as trigger actions. Note that STATEMENT-level trigger events are not supported in the current version. You can currently specify BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.

Trigger Creation

If a trigger event occurs, the trigger manager (called by the Executor) initializes the global structure TriggerData *CurrentTriggerData (described below) and calls the trigger function to handle the event. The trigger function must be created before the trigger is created, as a function taking no arguments and returning opaque.

The syntax for creating triggers is as follows:

    CREATE TRIGGER <trigger name> <BEFORE|AFTER> <INSERT|DELETE|UPDATE>
        ON <relation name> FOR EACH <ROW|STATEMENT>
        EXECUTE PROCEDURE <procedure name> (<function args>);

The name of the trigger is used if you ever have to delete the trigger. It is used as an argument to the DROP TRIGGER command. The next word determines whether the function is called before or after the event. The next element of the command determines on what event(s) will trigger the function. Multiple events can be specified separated by OR.
...the operation (INSERT, UPDATE, DELETE or SELECT) for the final result row should be executed or not. It is the WHERE clause of an SQL statement.

    the others
        The other parts of the querytree, like the ORDER BY clause, aren't of interest here. The rule system substitutes entries there while applying rules, but that doesn't have much to do with the fundamentals of the rule system. GROUP BY is a special thing when it appears in a view definition and still needs to be documented.

Views and the Rule System

Implementation of Views in Postgres

Views in Postgres are implemented using the rule system. In fact, there is absolutely no difference between a

    CREATE VIEW myview AS SELECT * FROM mytab;

compared against the two commands

    CREATE TABLE myview (<same attribute list as for mytab>);
    CREATE RULE "_RETmyview" AS ON SELECT TO myview DO INSTEAD
        SELECT * FROM mytab;

because this is exactly what the CREATE VIEW command does internally. This has some side effects. One of them is that the information about a view in the Postgres system catalogs is exactly the same as it is for a table. So for the query parsers, there is absolutely no difference between a table and a view. They are the same thing: relations. That is the important one for now.

How SELECT Rules Work

Rules ON SELECT are applied to all queries as the last step, even if the command given is an INSERT, UPDATE or DELETE. And they have different semantics from
Use the accessor functions below to get at the contents of PGresult. Avoid directly referencing the fields of the PGresult structure because they are subject to change in the future. (Beginning in Postgres release 6.4, the definition of struct PGresult is not even provided in libpq-fe.h. If you have old code that accesses PGresult fields directly, you can keep using it by including libpq-int.h too, but you are encouraged to fix the code soon.)

    PQresultStatus
        Returns the result status of the query. PQresultStatus can return one of the following values:

            PGRES_EMPTY_QUERY,
            PGRES_COMMAND_OK,    /* the query was a command returning no data */
            PGRES_TUPLES_OK,     /* the query successfully returned tuples */
            PGRES_COPY_OUT,      /* Copy Out (from server) data transfer started */
            PGRES_COPY_IN,       /* Copy In (to server) data transfer started */
            PGRES_BAD_RESPONSE,  /* an unexpected response was received */
            PGRES_NONFATAL_ERROR,
            PGRES_FATAL_ERROR

        If the result status is PGRES_TUPLES_OK, then the routines described below can be used to retrieve the tuples returned by the query. Note that a SELECT that happens to retrieve zero tuples still shows PGRES_TUPLES_OK. PGRES_COMMAND_OK is for commands that can never return tuples.

Chapter 16. libpq

    PQresStatus
        Converts the enumerated type returned by PQresultStatus into a string constant describing the status code.

            const char *PQresStatus(ExecStatusType status)

        Older code may perform this same operation
...Description

pg_lo_unlink deletes the specified large object.

Usage

Chapter 18. pgtcl

pg_lo_import

Name

pg_lo_import: import a large object from a Unix file

Synopsis

    pg_lo_import conn filename

Inputs

    conn
        Specifies a valid database connection
    filename
        Unix file name

Outputs

    None

    XXX Does this return a lobjId? Is that the same as the objOid in other calls? - thomas 1998-01-11

Description

pg_lo_import reads the specified file and places the contents into a large object.

Usage

pg_lo_import must be called within a BEGIN/END transaction block.

pg_lo_export

Name

pg_lo_export: export a large object to a Unix file

Synopsis

    pg_lo_export conn lobjId filename

Inputs

    conn
        Specifies a valid database connection
    lobjId
        Large object identifier

        XXX Is this the same as the objOid in other calls? - thomas 1998-01-11
    filename
        Unix file name

Outputs

    None

    XXX Does this return a lobjId? Is that the same as the objOid in other calls? - thomas 1998-01-11

Description

pg_lo_export writes the specified large object into a Unix file.

Usage

pg_lo_export must be called within a BEGIN/END transaction block.

Chapter 19. ecpg: Embedded SQL in C

This describes an embedded SQL in C package for Postgres. It is written by Linus Tolke (mailto:linus@epact.se) and Michael Meskes (mailto:meskes@postgresql.org).

    Note: Permission is granted to copy and use in the same way as you are allowed to copy
...is full but the terminating newline has not yet been read. Notice that the application must check to see if a new line consists of a single period ("."), which indicates that the backend server has finished sending the results of the copy. Therefore, if the application ever expects to receive lines that are more than length-1 characters long, the application must be sure to check the return value of PgDatabase::GetLine very carefully.

    PgDatabase::PutLine
        Sends a null-terminated string to the backend server.

            void PgDatabase::PutLine(char *string)

        The application must explicitly send a single period character (".") to indicate to the backend that it has finished sending its data.

    PgDatabase::EndCopy
        Syncs with the backend.

            int PgDatabase::EndCopy()

        This function waits until the backend has finished processing the copy. It should either be issued when the last string has been sent to the backend using PgDatabase::PutLine, or when the last string has been received from the backend using PgDatabase::GetLine. It must be issued or the backend may get out of sync with the frontend. Upon return from this function, the backend is ready to receive the next query. The return value is 0 on successful completion, nonzero otherwise.

As an example:

    PgDatabase data;
    data.Exec("create table foo (a int4, b char16, d float8)");
    data.Exec("copy foo from stdin");
    data.PutLine("3\tHello World\t4.5\n");
    data.PutLine("4\tGoodbye World\t7.11\n");
...it encounters an abort during execution of a function or trigger procedure is to write some additional DEBUG-level log messages telling in which function and where (line number and type of statement) this happened.

Examples

Here are only a few functions to demonstrate how easy PL/pgSQL functions can be written. For more complex examples the programmer might look at the regression test for PL/pgSQL.

One painful detail of writing functions in PL/pgSQL is the handling of single quotes. The function's source text on CREATE FUNCTION must be a literal string. Single quotes inside of literal strings must be either doubled or quoted with a backslash. We are still looking for an elegant alternative. In the meantime, doubling the single quotes as in the examples below should be used. Any solution for this in future versions of Postgres will be upward compatible.

Some Simple PL/pgSQL Functions

The following two PL/pgSQL functions are identical to their counterparts from the C language function discussion:

    CREATE FUNCTION add_one (int4) RETURNS int4 AS '
        BEGIN
            RETURN $1 + 1;
        END;
    ' LANGUAGE 'plpgsql';

    CREATE FUNCTION concat_text (text, text) RETURNS text AS '
        BEGIN
            RETURN $1 || $2;
        END;
    ' LANGUAGE 'plpgsql';

PL/pgSQL Function on Composite Type

Again it is the PL/pgSQL equivalent to the example from the C functions:

    CREATE FUNCTION c_overpaid (EMP, int4) RETURNS bool AS '
        DECLARE
            emprec ALIAS FOR $1;
...definition above, the user query will be rewritten to the following form. (Note that the rewriting is done on the internal representation of the user query handed back by the parser stage, but the derived new data structure will represent the following query:)

    select s.sname from supplier s, sells se, part p
        where s.sno = se.sno and
              p.pno = se.pno and
              s.sname <> 'Smith';

Chapter 22. Overview of PostgreSQL Internals

Planner/Optimizer

The task of the planner/optimizer is to create an optimal execution plan. It first combines all possible ways of scanning and joining the relations that appear in a query. All the created paths lead to the same result, and it's the task of the optimizer to estimate the cost of executing each path and find out which one is the cheapest.

Generating Possible Plans

The planner/optimizer decides which plans should be generated based upon the types of indices defined on the relations appearing in a query. There is always the possibility of performing a sequential scan on a relation, so a plan using only sequential scans is always created. Assume an index is defined on a relation (for example a B-tree index) and a query contains the restriction relation.attribute OPR constant. If relation.attribute happens to match the key of the B-tree index and OPR is anything but '<>', another plan is created using the B-tree index to scan the relation. If there are further indices present and the restrictions in the
...given name arrives from the backend. This occurs when any Postgres client application issues a NOTIFY command referencing that name. (Note that the name can be, but does not have to be, that of an existing relation in the database.)

The command string is executed from the Tcl idle loop. That is the normal idle state of an application written with Tk. In non-Tk Tcl shells, you can execute update or vwait to cause the idle loop to be entered.

You should not invoke the SQL statements LISTEN or UNLISTEN directly when using pg_listen. Pgtcl takes care of issuing those statements for you. (But if you want to send a NOTIFY message yourself, invoke the SQL NOTIFY statement using pg_exec.)

pg_lo_creat

Name

pg_lo_creat: create a large object

Synopsis

    pg_lo_creat conn mode

Inputs

    conn
        Specifies a valid database connection
    mode
        Specifies the access mode for the large object

Outputs

    objOid
        The oid of the large object created

Description

pg_lo_creat creates an Inversion Large Object.

Usage

mode can be any OR'ing together of INV_READ, INV_WRITE, and INV_ARCHIVE. The OR delimiter character is "|":

    [pg_lo_creat $conn "INV_READ|INV_WRITE"]

pg_lo_open

Name

pg_lo_open: open a large object

Synopsis

    pg_lo_open conn objOid mode

Inputs

    conn
        Specifies a valid database connection
    objOid
        Specifies a valid large object oid
    mode
        Specifies the access mode for the large object
...optimization clause can result in backend crashes, subtly wrong output, or other Bad Things. You can always leave out an optimization clause if you are not sure about it; the only consequence is that queries might run slower than they need to. Additional optimization clauses might be added in future versions of Postgres. The ones described here are all the ones that release 6.5 understands.

COMMUTATOR

The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, operators '<' and '>' for a particular datatype are usually each other's commutators, and operator '=' is usually commutative with itself. But operator '-' is usually not commutative with anything.

The left argument type of a commuted operator is the same as the right argument type of its commutator, and vice versa. So the name of the commutator operator is all that Postgres needs to be given to look up the commutator, and that's all that need be provided in the COMMUTATOR clause.

When you are defining a self-commutative operator, you just do it. When you are defining a pair of commutative operators, things are a little trickier: how can the first one to be defined refer to the other one, which you haven't defined yet? There are two solutions to this problem
...work OK whether you use PQsendQuery/PQgetResult or plain old PQexec for queries. You should, however, remember to check PQnotifies after each PQgetResult or PQexec to see if any notifications came in during the processing of the query.

Functions Associated with the COPY Command

The COPY command in Postgres has options to read from or write to the network connection used by libpq. Therefore, functions are necessary to access this network connection directly so applications may take advantage of this capability. These functions should be executed only after obtaining a PGRES_COPY_OUT or PGRES_COPY_IN result object from PQexec or PQgetResult.

    PQgetline
        Reads a newline-terminated line of characters (transmitted by the backend server) into a buffer string of size length.

            int PQgetline(PGconn *conn, char *string, int length)

        Like fgets(3), this routine copies up to length-1 characters into string. It is like gets(3), however, in that it converts the terminating newline into a null character. PQgetline returns EOF at EOF, 0 if the entire line has been read, and 1 if the buffer is full but the terminating newline has not yet been read. Notice that the application must check to see if a new line consists of the two characters "\.", which indicates that the backend server has finished sending the results of the copy command. If the application might receive lines that are more than length-1 characters long, care is needed to be sure one recognizes the
...works. And as soon as the user will notice that the secretary opened his phone_number view, he can REVOKE his access. Immediately, any access to the secretary's view will fail.

Someone might think that this rule-by-rule checking is a security hole, but in fact it isn't. If this would not work, the secretary could setup a table with the same columns as phone_number and copy the data to there once per day. Then it's his own data and he can grant access to everyone he wants. A GRANT means "I trust you". If someone you trust does the thing above, it's time to think it over and then REVOKE.

This mechanism does also work for update rules. In the examples of the previous section, the owner of the tables in Al's database could GRANT SELECT, INSERT, UPDATE and DELETE on the shoelace view to al. But only SELECT on shoelace_log. The rule action to write log entries will still be executed successfully. And Al could see the log entries. But he cannot create fake entries, nor could he manipulate or remove existing ones.

    Warning: GRANT ALL currently includes RULE permission. This means the granted user could drop the rule, do the changes and reinstall it. I think this should get changed quickly.

Chapter 8. The Postgres Rule System

Rules versus Triggers

Many things that can be done using triggers can also be implemented using the Postgres rule system. What currently cannot be implemented by rules are some kinds of constraints. It is possible to place
...l.gov) contains good information on GiST. Hopefully we will learn more in the future and update this information. - thomas 1998-03-01

Well, I can't say I quite understand what's going on, but at least I almost succeeded in porting GiST examples to linux. The GiST access method is already in the postgres tree (src/backend/access/gist).

Examples at Berkeley (ftp://s2k-ftp.cs.berkeley.edu/pub/gist/pggist/pggist.tgz) come with an overview of the methods and demonstrate spatial index mechanisms for 2D boxes, polygons, integer intervals and text (see also GiST at Berkeley (http://gist.cs.berkeley.edu:8000/gist)). In the box example, we are supposed to see a performance gain when using the GiST index; it did work for me but I do not have a reasonably large collection of boxes to check that. Other examples also worked, except polygons: I got an error doing

    test=> create index pix on polytmp
    test-> using gist (p:box gist_poly_ops) with (islossy);
    ERROR: cannot open pix

    (PostgreSQL 6.3, Sun Feb 1 14:57:30 EST 1998)

I could not get sense of this error message; it appears to be something we'd rather ask the developers about (see also Note 4 below). What I would suggest here is that someone of you linux guys (linux+gcc) fetch the original sources quoted above, apply my patch (see attachment), and tell us what you feel about it. Looks cool to me, but I would not like to hold it up while there are so many competent people around.

A few notes on the sources
...lace_arrive.arr_quant),
    sl_color = shoelace.sl_color,
    sl_len = shoelace.sl_len,
    sl_unit = shoelace.sl_unit
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*,
       shoelace shoelace, shoelace *OLD*, shoelace *NEW*,
       shoelace_data shoelace_data
 WHERE bpchareq(shoelace.sl_name, shoelace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, shoelace.sl_name);

Again it's an INSTEAD rule and the previous parsetree is trashed. Note that this query still uses the view shoelace. But the rule system isn't finished with this loop, so it continues and applies the rule _RETshoelace on it, and we get

UPDATE shoelace_data SET
    sl_name = s.sl_name,
    sl_avail = int4pl(s.sl_avail, shoelace_arrive.arr_quant),
    sl_color = s.sl_color,
    sl_len = s.sl_len,
    sl_unit = s.sl_unit
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*,
       shoelace shoelace, shoelace *OLD*, shoelace *NEW*,
       shoelace_data shoelace_data, shoelace *OLD*,
       shoelace *NEW*, shoelace_data s, unit u
 WHERE bpchareq(s.sl_name, shoelace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, s.sl_name);

Again an update rule has been applied, and so the wheel turns on and we are in rewrite round 3. This time, rule log_shoelace gets applied, which produces the extra parsetree

INSERT INTO shoelace_log SELECT
    s.sl_name,
    int4pl(s.sl_avail, shoelace_arrive.arr_quant),
    getpgusername(),
    date...
...ld, excluding this size.
    Byte-n: Specifies the value of the field itself in binary format. (n is the above size.)

CancelRequest (F)
    Int32(16): The size of the packet in bytes.
    Int32(80877102): The cancel request code. The value is chosen to contain 1234 in the most significant 16 bits, and 5678 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any protocol version number.)
    Int32: The process ID of the target backend.
    Int32: The secret key for the target backend.

CompletedResponse (B)
    Byte1('C'): Identifies the message as a completed response.
    String: The command tag. This is usually (but not always) a single word that identifies which SQL command was completed.

CopyDataRows (B & F)
    This is a stream of rows where each row is terminated by a Byte1('\n'). This is then followed by the sequence Byte1('\\'), Byte1('.'), Byte1('\n').

CopyInResponse (B)
    Byte1('G'): Identifies the message as a Start Copy In response. The frontend must now send a CopyDataRows message.

CopyOutResponse (B)
    Byte1('H'): Identifies the message as a Start Copy Out response. This message will be followed by a CopyDataRows message.

CursorResponse (B)
    Byte1('P'): Identifies the message as a cursor response.
    String: The name of the cursor. This will be "blank" if the cursor is implicit.

EmptyQueryResponse (B)
    Byte1('I'): Identifies the messa...
...led to read the data, after which PQisBusy, PQgetResult and/or PQnotifies can be used to process the response.

A typical frontend using these functions will have a main loop that uses select(2) to wait for all the conditions that it must respond to. One of the conditions will be input available from the backend, which in select's terms is readable data on the file descriptor identified by PQsocket. When the main loop detects input ready, it should call PQconsumeInput to read the input. It can then call PQisBusy, followed by PQgetResult if PQisBusy returns FALSE. It can also call PQnotifies to detect NOTIFY messages (see Asynchronous Notification, below).

A frontend that uses PQsendQuery/PQgetResult can also attempt to cancel a query that is still being processed by the backend.

PQrequestCancel: Request that Postgres abandon processing of the current query.

int PQrequestCancel(PGconn *conn);

The return value is TRUE if the cancel request was successfully dispatched, FALSE if not. (If not, PQerrorMessage tells why not.) Successful dispatch is no guarantee that the request will have any effect, however. Regardless of the return value of PQrequestCancel, the application must continue with the normal result-reading sequence using PQgetResult. If the cancellation is effective, the current query will terminate early and return an error result. If the cancellation fails (say, because the backend was already done processing the query), then there will be...
...lem. One way is to omit the COMMUTATOR clause in the first operator that you define, and then provide one in the second operator's definition. Since Postgres knows that commutative operators come in pairs, when it sees the second definition it will automatically go back and fill in the missing COMMUTATOR clause in the first definition.

The other, more straightforward way is just to include COMMUTATOR clauses in both definitions. When Postgres processes the first definition and realizes that COMMUTATOR refers to a non-existent operator, the system will make a dummy entry for that operator in the system's pg_operator table. This dummy entry will have valid data only for the operator name, left and right argument types, and result type, since that's all that Postgres can deduce at this point. The first operator's catalog entry will link to this dummy entry. Later, when you define the second operator, the system updates the dummy entry with the additional information from the second definition. If you try to use the dummy operator before it's been filled in, you'll just get an error message. (Note: this procedure did not work reliably in Postgres versions before 6.5, but it is now the recommended way to do things.)

NEGATOR

The NEGATOR clause, if provided, names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return boolean re...
...les. These operations do not correspond to user qualifications in SQL queries; they are administrative routines used by the access methods, internally.

In order to manage diverse support routines consistently across all Postgres access methods, pg_am includes an attribute called amsupport. This attribute records the number of support routines used by an access method. For B-trees, this number is one: the routine to take two keys and return -1, 0 or +1, depending on whether the first key is less than, equal to, or greater than the second. (Note: Strictly speaking, this routine can return a negative number (< 0), 0, or a non-zero positive number (> 0).)

The amstrategies entry in pg_am is just the number of strategies defined for the access method in question. The procedures for less-than, less-equal, and so on don't appear in pg_am. Similarly, amsupport is just the number of support routines required by the access method. The actual routines are listed elsewhere.

The next class of interest is pg_opclass. This class exists only to associate a name and default type with an oid. In pg_amop, every B-tree operator class has a set of procedures, one through five, above. Some existing opclasses are int2_ops, int4_ops, and oid_ops. You need to add an instance with your opclass name (for example, complex_abs_ops) to pg_opclass. The oid of this instance is a foreign key in other classes.

INSERT IN...
...llation.

1. Specify the --with-odbc command-line argument for src/configure:

% ./configure --with-odbc
% make

2. Rebuild the Postgres distribution:

% make install

Once configured, the ODBC driver will be built and installed into the areas defined for the other components of the Postgres system. The installation-wide ODBC configuration file will be placed into the top directory of the Postgres target tree (POSTGRESDIR). This can be overridden from the make command line as

% make ODBCINST=filename install

Pre-v6.4 Integrated Installation

If you have a Postgres installation older than v6.4, you have the original source tree available, and you want to use the newest version of the ODBC driver, then you may want to try this form of installation.

1. Copy the output tar file to your target system and unpack it into a clean directory.

2. From the directory containing the sources, type:

% ./configure
% make
% make POSTGRESDIR=PostgresTopDir install

3. If you would like to install components into different trees, then you can specify various destinations explicitly:

% make BINDIR=bindir LIBDIR=libdir HEADERDIR=headerdir ODBCINST=instfile install

Standalone Installation

A standalone installation is not integrated with, or built on, the normal Postgres distribution. It is best suited for building the ODBC driver for multiple, heterogeneous clients who do not have a locally-installed Postgres source tree.
...lled there as part of installing Postgres. These source files get built as part of the Postgres build process by a build program called genbki. genbki takes as input Postgres source files that double as genbki input, and builds tables and C header files that describe those tables. Related information may be found in the documentation for initdb, createdb, and the SQL command CREATE DATABASE.

BKI File Format

The Postgres backend interprets BKI files as described below. This description will be easier to understand if the global1.bki source file is at hand as an example. (As explained above, this source file isn't quite a BKI file, but you'll be able to guess what the resulting BKI file would be anyway.)

Commands are composed of a command name followed by space-separated arguments. Arguments to a command which begin with a "$" are treated specially. If "$$" are the first two characters, then the first "$" is ignored and the argument is then processed normally. If the "$" is followed by space, then it is treated as a NULL value. Otherwise, the characters following the "$" are interpreted as the name of a macro, causing the argument to be replaced with the macro's value. It is an error for this macro to be undefined.

Macros are defined using

define macro macro_name = macro_value

and are undefined using

undefine macro macro_name

and redefined using the same syntax as define.

Lists of general commands and macro commands follow.

General Commands

OPE...
    long sqlcode;
    struct
    {
        int  sqlerrml;
        char sqlerrmc[70];
    } sqlerrm;
    char sqlerrp[8];
    long sqlerrd[6];
    /* 0: empty
     * 1: empty
     * 2: number of rows processed in an INSERT, UPDATE or DELETE statement
     * 3: empty
     * 4: empty
     * 5: empty */
    char sqlwarn[8];
    /* 0: set to 'W' if at least one other is 'W'
     * 1: if 'W', at least one character string value was truncated when it
     *    was stored into a host variable
     * 2: empty
     * 3: empty
     * 4: empty
     * 5: empty
     * 6: empty
     * 7: empty */
    char sqlext[8];
} sqlca;

If an error occurred in the last SQL statement, then sqlca.sqlcode will be non-zero. If sqlca.sqlcode is less than 0, then this is some kind of serious error, like the database definition does not match the query given. If it is bigger than 0, then this is a normal error, like the table did not contain the requested row.

sqlca.sqlerrm.sqlerrmc will contain a string that describes the error. The string ends with the line number in the source file.

List of errors that can occur:

-12: Out of memory in line %d.
    Does not normally occur. This is a sign that your virtual memory is exhausted.

-200: Unsupported type %s on line %d.
    Does not normally occur. This is a sign that the preprocessor has generated something that the library does not know about. Perhaps you are running incompatible versions of the preprocessor and the library.

-201...
...lso be an abbreviation of the option name defined in backend/utils/misc/trace.c. Refer to The Administrator's Guide chapter on runtime options for a complete list of currently supported options.

Some of the existing code using private variables and option switches has been changed to make use of the pg_options feature, mainly in postgres.c. It would be advisable to modify all existing code in this way, so that we can get rid of many of the switches on the Postgres command line and can have more tunable options, with a unique place to put option values.

Chapter 24. Genetic Query Optimization

Author: Written by Martin Utesch (utesch@aut.tu-freiberg.de) for the Institute of Automatic Control at the University of Mining and Technology in Freiberg, Germany.

Query Handling as a Complex Optimization Problem

Among all relational operators, the most difficult one to process and optimize is the join. The number of alternative plans to answer a query grows exponentially with the number of joins included in it. Further optimization effort is caused by the support of a variety of join methods (e.g., nested loop, index scan, merge join in Postgres) to process individual joins, and a diversity of indices (e.g., r-tree, b-tree, hash in Postgres) as access paths for relations.

The current Postgres optimizer implementation performs a near-exhaustive search over the space of alternative strategies. This query optimization technique is inadequate to support...
...lting from the rule's query

DELETE FROM software WHERE computer.manufacturer = 'bim'
    AND software.hostname = computer.hostname;

In any of these cases, the extra queries from the rule system will be more or less independent from the number of affected rows in a query.

Another situation is cases on UPDATE where it depends on the change of an attribute whether an action should be performed or not. In Postgres version 6.4, the attribute specification for rule events is disabled (it will have its comeback latest in 6.5, maybe earlier - stay tuned). So for now, the only way to create a rule as in the shoelace_log example is to do it with a rule qualification. That results in an extra query that is performed always, even if the attribute of interest cannot change at all, because it does not appear in the targetlist of the initial query. When this is enabled again, it will be one more advantage of rules over triggers. Optimization of a trigger must fail by definition in this case, because the fact that its actions will only be done when a specific attribute is updated is hidden in its functionality. The definition of a trigger only allows to specify it on row level, so whenever a row is touched, the trigger must be called to make its decision. The rule system will know it by looking up the targetlist, and will suppress the additional query completely if the attribute isn't touched. So the rule, qualified or not, ...
...ly loads the driver using the Class.forName() method. For Postgres, you would use:

Class.forName("postgresql.Driver");

This will load the driver, and while loading, the driver will automatically register itself with JDBC.

Note: The forName() method can throw a ClassNotFoundException, so you will need to catch it if the driver is not available.

This is the most common method to use, but it restricts your code to use just Postgres. If your code may access another database in the future, and you don't use our extensions, then the second method is advisable.

The second method passes the driver as a parameter to the JVM as it starts, using the -D argument. Example:

% java -Djdbc.drivers=postgresql.Driver example.ImageViewer

In this example, the JVM will attempt to load the driver as part of its initialisation. Once done, the ImageViewer is started. Now, this method is the better one to use, because it allows your code to be used with other databases without recompiling the code. The only thing that would also change is the URL, which is covered next.

One last thing: when your code then tries to open a Connection, and you get a "No driver available" SQLException being thrown, this is probably caused by the driver not being in the classpath, or the value in the parameter not being correct.

Connecting to the Database

With JDBC, a database is represented by a URL (Uniform Resource Locator). Wit...
...ly those that have to do with access methods that do not. The relationships between pg_am, pg_amop, pg_amproc, pg_operator and pg_opclass are particularly hard to understand, and will be described in depth (in the section on interfacing types and operators to indices) after we have discussed basic extensions.

Chapter 4. Extending SQL: Functions

As it turns out, part of defining a new type is the definition of functions that describe its behavior. Consequently, while it is possible to define a new function without defining a new type, the reverse is not true. We therefore describe how to add new functions to Postgres before describing how to add new types.

Postgres SQL provides two types of functions: query language functions (functions written in SQL) and programming language functions (functions written in a compiled programming language such as C). Either kind of function can take a base type, a composite type, or some combination as arguments (parameters). In addition, both kinds of functions can return a base type or a composite type. It's easier to define SQL functions, so we'll start with those. Examples in this section can also be found in funcs.sql and funcs.c.

Query Language (SQL) Functions

SQL Functions on Base Types

The simplest possible SQL function has no arguments and simply returns a base type, such as int4:

CREATE FUNCTION one() RETURNS int4
    AS 'SELECT 1 as RESULT' LANGUAGE 'sql';

SELECT one() AS answer;
...mber of strategies for this access method (see below)

amsupport: number of support routines for this access method (see below)

...: procedure identifiers for interface routines to the access method. For example, regproc ids for opening, closing, and getting instances from the access method appear here.

The object ID of the instance in pg_am is used as a foreign key in lots of other classes. You don't need to add a new instance to this class; all you're interested in is the object ID of the access method instance you want to extend:

SELECT oid FROM pg_am WHERE amname = 'btree';

 oid
-----
 403
(1 row)

We will use that SELECT in a WHERE clause later.

The amstrategies attribute exists to standardize comparisons across data types. For example, B-trees impose a strict ordering on keys, lesser to greater. Since Postgres allows the user to define operators, Postgres cannot look at the name of an operator (eg, > or <) and tell what kind of comparison it is. In fact, some access methods don't impose any ordering at all. For example, R-trees express a rectangle containment relationship, whereas a hashed data structure expresses only bitwise similarity based on the value of a hash function. Postgres needs some consistent way of taking a qualification in your query, looking at the operator, and then deciding if a usable index exists. This implies that Postgres needs to know, fo...
...me = 1, int terseOutput = 0, int width = 0)

GetLine
int PgDatabase::GetLine(char *string, int length)

PutLine
void PgDatabase::PutLine(const char *string)

OidStatus
const char *PgDatabase::OidStatus()

EndCopy
int PgDatabase::EndCopy()

Asynchronous Notification

Postgres supports asynchronous notification via the LISTEN and NOTIFY commands. A backend registers its interest in a particular semaphore with the LISTEN command. All backends that are listening on a particular named semaphore will be notified asynchronously when a NOTIFY of that name is executed by another backend. No additional information is passed from the notifier to the listener. Thus, typically, any actual data that needs to be communicated is transferred through the relation.

Note: In the past, the documentation has associated the names used for asynchronous notification with relations or classes. However, there is in fact no direct linkage of the two concepts in the implementation, and the named semaphore in fact does not need to have a corresponding relation previously defined.

libpq applications are notified whenever a connected backend has received an asynchronous notification. However, the communication from the backend to the frontend is not asynchronous. The libpq application must poll the backend to see if there is any pending notification information. After the execution of a query, a frontend may call PgDatabas...
...memory in memory contexts in such a way that allocations made in one context may be freed by context destruction, without affecting allocations made in other contexts. All allocations (via palloc, etc.) are made in the context which is chosen as the current one. You'll get unpredictable results if you try to free (or reallocate) memory that was not allocated in the current context.

Creation of, and switching between, memory contexts are subjects of the SPI manager's memory management.

SPI procedures deal with two memory contexts: the upper Executor memory context, and the procedure memory context (if connected). Before a procedure is connected to the SPI manager, the current memory context is the upper Executor context, so all allocations made by the procedure itself (via palloc/repalloc, or by SPI utility functions) before connecting to SPI are made in this context.

After SPI_connect is called, the current context is the procedure's one. All allocations made via palloc/repalloc, or by SPI utility functions (except for SPI_copytuple, SPI_modifytuple, SPI_palloc and SPI_repalloc), are made in this context. When a procedure disconnects from the SPI manager (via SPI_finish), the current context is restored to the upper Executor context, and all allocations made in the procedure memory context are freed and can't be used any more. If you want to return something to the upper Executor, then you have to allocate memory for this in the upper context! SP...
231. ming Language Functions on Base Types eee 13 Programming Language Functions on Composite Types sees 15 CAV CALS P E 16 5 Extending SQL Types eerie eee eee En ATENa stata sins ta tns on ens in statu A Eaa 18 User Defined Types tee at ni POE RR EO UE ERE 18 Functions Needed for a User Defined Type see 18 Large Objects 2 soe EHE O NI CRINE R EP EIO 19 6 Extending SQL Operators 4 eere eese reser esee ee ette esent ns tone tane ta aeta sense ease ense ease enata 20 Operator Optimization Information sseeseseeeeeeeeeenen eene ener 21 COMMUTATOR nenita eti dete etapas 21 NEGATOR3 o tf FEED 22 NRAN A pete n PC 22 JOIN uix eB ensem dead noti 23 Nil M 23 SORTI and SORT2 ica na Re ren RD arb ep me Ke 24 7 Extending SQL Aggregates eee e eee esee eene testen tn stata sins tn tasas totns ta tasas tatnen san 26 8 The Postgres Rule System eerie e eese eese esee esee isses ss soio ro toas tasto aetas etas e ease ease tasto 28 Whatis a Querytree en altes oer p pen eee ee Op e Ee ates E 28 The Parts of a Querytree eee pte erit nr e tries 28 Views and the Rule System ssssseeeseeseeeer eere ener tenerent nennen 30 Implementation of Views in Postgres eseeeeeeeeeeeeene enne 30 Ho
232. n nonzero otherwise int PQendcopy PGconn conn As an example PQexec conn create table foo a int4 b char 16 d float8 PQexec conn copy foo from stdin PQputline conn 3 thello world t4 5 n PQputline conn 4 tgoodbye world t7 11 n PQputline conn n PQendcopy conn When using PQgetResult the application should respond to a PGRES COPY OUT result by executing PQgetline repeatedly followed by PQendcopy after the terminator line is seen It should then return to the PQgetResult loop until PQgetResult returns NULL Similarly a PGRES_COPY_IN result is processed by a series of PQputline calls followed by PQendcopy then return to the PQgetResult loop This arrangement will ensure that a copy in or copy out 128 Chapter 16 libpq command embedded in a series of SQL commands will be executed correctly Older applications are likely to submit a copy in or copy out via PQexec and assume that the transaction is done after PQendcopy This will work correctly only if the copy in out is the only SQL command in the query string libpq Tracing Functions PQtrace Enable tracing of the frontend backend communication to a debugging file stream void PQtrace PGconn conn FILE debug port PQuntrace Disable tracing started by PQtrace void PQuntrace PGconn conn libpq Control Functions PQsetNoticeProcessor Control reporting of notice and warning messages generated by libpq void PQsetNoticeProcessor PGco
233. n attribute class and class attribute interchangably this is the same as SELECT EMP name AS youngster FROM EMP WHERE EMP age 30 SELECT name EMP AS youngster FROM EMP WHERE age EMP 30 youngster Sam As we shall see however this is not always the case This function notation is important when we want to use a function that returns a single instance We do this by assembling the entire instance within the function attribute by attribute This is an example of a function that returns a single EMP instance CREATE FUNCTION new_emp RETURNS EMP AS SELECT None text AS name 1000 AS salary 25 AS age N 2 2 N point AS cubicle LANGUAGE sql In this case we have specified each of the attributes with a constant value but any computation or expression could have been substituted for these constants Defining a function like this can be tricky Some of the more important caveats are as follows The target list order must be exactly the same as that in which the attributes appear in the CREATE TABLE statement or when you execute a query You must typecast the expressions using very carefully or you will see the following error 12 Chapter 4 Extending SOL Functions WARN function declared to return type EMP does not retrieve EMP When calling a function that returns an instance we cannot retrieve the entire instance We m
234. n be used by EXIT statements of nested loops to specify which level of nesting should be terminated lt lt label gt gt WHILE expression LOOP statements END LOOP A conditional loop that is executed as long as the evaluation of expression is true lt lt label gt gt FOR name IN REVERSE expression expression LOOP statements END LOOP 67 Chapter 11 Procedural Languages A loop that iterates over a range of integer values The variable name is automatically created as type integer and exists only inside the loop The two expressions giving the lower and upper bound of the range are evaluated only when entering the loop The iteration step is always 1 lt lt label gt gt FOR record row IN select_clause LOOP statements END LOOP The record or row is assigned all the rows resulting from the select clause and the statements executed for each If the loop is terminated with an EXIT statement the last assigned row is still accessible after the loop EXIT label WHEN expression If no label given the innermost loop is terminated and the statement following END LOOP is executed next If label is given it must be the label of the current or an upper level of nested loop blocks Then the named loop or block is terminated and control continues with the statement after the loops blocks corresponding END Trigger Procedures PL pgSQL can be used to define trigger procedures They are created with th
235. n error dialog box see the debugging section below The Ready message will appear in the lower left corner of the data window This indicates that you can now enter queries 180 Chapter 20 ODBC Interface 4 Selecta table from Query gt Choose tables and then select Query gt Query to access the database The first 50 or so rows from the table should appear Common Problems The following messages can appear while trying to make an ODBC connection through Applix Data Cannot launch gateway on server elfodbc can t find libodbc so Check your axnet cnf Error from ODBC Gateway IM003 ODBC Driver Manager Specified driver could not be loaded libodbc so cannot find the driver listed in odbc ini Verify the settings Server Broken Pipe The driver process has terminated due to some other problem You might not have an up to date version of the Postgres ODBC package setuid to 256 failed to launch gateway The September release of ApplixWare v4 4 1 the first release with official ODBC support under Linux shows problems when usernames exceed eight 8 characters in length Problem description ontributed by Steve Campbell mailto scampbell lear com Author Contributed by Steve Campbell mailto scampbell lear com on 1998 10 20 The axnet program s security system seems a little suspect axnet does things on behalf of the user and on a true multiple user system it really should be run with root security so it c
236. n file Here is a CVSup configuration file modified for a specific installation and which maintains a full local CVS repository This file represents the standard CVSup distribution file for the PostgreSQL ORDBMS project Modified by lockhart alumni caltech edu 1997 08 28 Point to my local snapshot source tree Pull the full CVS repository not just the latest snapshot H HE HE HE FE FE Defaults that apply to all the collections default host postgresql org default compress default release cvs default delete use rel suffix enable the following line to get the latest snapshot default tag enable the following line to get whatever was specified above or by default at the date specified below default date 97 08 29 00 00 00 base directory points to where CVSup will store its bookmarks file s will create subdirectory sup default base opt postgres usr local pgsql default base home cvs prefix directory points to where CVSup will store the actual distribution s default prefix home cvs complete distribution including all below pgsql individual distributions vs the whole thing pgsql doc pgsql perl5 pgsql sre 229 Appendix DG1 The CVS Repository The following is a suggested CVSup config file from the Postgres ftp site ftp ftp postgresql org pub CVSup README cvsup which will fetch the current snapshot only H This file represents the sta
237. ncluding the source for the unknown module that must get installed initially 75 Chapter 12 Linking Dynamically Loaded Functions After you have created and registered a user defined function your work is essentially done Postgres however must load the object code e g a o file or a shared library that implements your function As previously mentioned Postgres loads your code at runtime as required In order to allow your code to be dynamically loaded you may have to compile and link edit it in a special way This section briefly describes how to perform the compilation and link editing required before you can load your user defined functions into a running Postgres server Note that this process has changed as of Version 4 2 Tip The old Postgres dynamic loading mechanism required in depth knowledge in terms of executable format placement and alignment of executable instructions within memory etc on the part of the person writing the dynamic loader Such loaders tended to be slow and buggy As of Version 4 2 the Postgres dynamic loading mechanism has been rewritten to use the dynamic loading mechanism provided by the operating system This approach is generally faster more reliable and more portable than our previous dynamic loading mechanism The reason for this is that nearly all modern versions of UNIX use a dynamic loading mechanism to implement shared libraries and must therefore provide a fast and reliable mechanism On t
238. nd NULL if this is for an INSERT or a DELETE This is what you are to return to Executor if UPDATE and you don t want to replace this tuple with another Skip the operation tg trigger one or is pointer to structure Trigger defined in src include utils rel h typedef struct Trigger char tgname Oid tgfoid func ptr tgfunc int16 tgtype int16 tgnargs int16 tgattr 8 char tgargs Trigger tgname is the trigger s name tgnargs is number of arguments tgargs tgargs is an array of pointers to the arguments specified in CREATE TRIGGER statement Other members are for internal use only Visibility of Data Changes in the Postgres data changes visibility rule during a query execution data changes made by the query itself via SQL function SPI function triggers are invisible to the query scan For example in query INSERT INTO a SELECT FROM a tuples inserted are invisible for SELECT scan In effect this duplicates the database table within itself subject to unique index rules of course without recursing But keep in mind this notice about visibility in the SPI documentation Changes made by query Q are visible by queries which are started after query Q no matter whether they are started inside Q during execution of Q or after Q is done the This is true for triggers as well so though a tuple being inserted tg trigtuple is not visible to queries in a BEFORE trigger this tuple ju
239. nd executes a given chunk of code for each tuple in the result The queryString must be a SELECT statement Anything else returns an error The arrayVar variable is an array name used in the loop For each tuple array Var is filled in with the tuple field values using the field names as the array indexes Then the queryProcedure is executed Usage This would work if table table has fields control and name and perhaps other fields pg_select pgconn SELECT from table array puts format 5d s array control array name 154 Chapter 18 pgtcl pg_listen Name pg_listen sets or changes a callback for asynchronous NOTIFY messages Synopsis pg listen dbHandle notifyName callbackCommand Inputs dbHandle Specifies a valid database handle notifyName Specifies the notify condition name to start or stop listening to callbackCommand If present and not empty provides the command string to execute when a matching notification arrives Outputs None Description pg listen creates changes or cancels a request to listen for asynchronous NOTIFY messages from the Postgres backend With a callbackCommand parameter the request is established or the command string of an already existing request is replaced With no callbackCommand parameter a prior request is canceled After a pg listen request is established the specified command string is executed whenever a NOTIFY message bearing the g
…ndard CVSup distribution file for the PostgreSQL ORDBMS project:

    # Defaults that apply to all the collections
    *default host=postgresql.org
    *default compress
    *default release=cvs
    *default delete use-rel-suffix
    *default tag=.
    # base directory points to where CVSup will store its 'bookmarks' file(s)
    *default base=/usr/local/pgsql
    # prefix directory points to where CVSup will store the actual distribution(s)
    *default prefix=/usr/local/pgsql
    # complete distribution, including all below
    pgsql
    # individual distributions vs. the whole thing
    # pgsql-doc
    # pgsql-perl5
    # pgsql-src

Installing CVSup

CVSup is available as source, pre-built binaries, or Linux RPMs. It is far easier to use a binary than to build from source, primarily because the very capable, but voluminous, Modula-3 compiler is required for the build.

CVSup Installation from Binaries

You can use pre-built binaries if you have a platform for which binaries are posted on the Postgres ftp site (ftp://postgresql.org/pub), or if you are running FreeBSD, for which CVSup is available as a port.

Note: CVSup was originally developed as a tool for distributing the FreeBSD source tree. It is available as a port, and for those running FreeBSD, if this is not sufficient to tell how to obtain and install it then please contribute a procedure here.

At the time of writing, binaries are available for Alpha/Tru64, ix86/xBSD, HPPA/HPUX-10.20, MIPS/irix, ix86/li…
…nerations of search points are found that show a higher average fitness than their ancestors.

According to the comp.ai.genetic FAQ it cannot be stressed too strongly that a GA is not a pure random search for a solution to a problem. A GA uses stochastic processes, but the result is distinctly non-random (better than random).

Structured Diagram of a GA:

    P(t)    generation of ancestors at a time t
    P''(t)  generation of descendants at a time t

    +=========================================+
    |             Algorithm GA                |
    +=========================================+
    | INITIALIZE t := 0                       |
    | INITIALIZE P(t)                         |
    | evaluate FITNESS of P(t)                |
    | while not STOPPING CRITERION do         |
    |    P'(t)  := RECOMBINATION{P(t)}        |
    |    P''(t) := MUTATION{P'(t)}            |
    |    P(t+1) := SELECTION{P''(t) + P(t)}   |
    |    evaluate FITNESS of P''(t)           |
    |    t := t + 1                           |
    +=========================================+

Genetic Query Optimization (GEQO) in Postgres

The GEQO module is intended for the solution of the query optimization problem similar to a traveling salesman problem (TSP). Possible query plans are encoded as integer strings. Each string represents the join order from one relation of the query to the…
…ng ways:

    dbname[@server][:port] [as connection name] [user user name]
    tcp:postgresql://server[:port][/dbname] [as connection name] [user user name]
    unix:postgresql://server[:port][/dbname] [as connection name] [user user name]
    character variable [as connection name] [user user name]
    character string [as connection name] [user]
    default
    user

There are also different ways to specify the user name:

    userid
    userid/password
    userid identified by password
    userid using password

Finally, the userid and the password: each may be a constant text, a character variable, or a character string.

Disconnect statements

A disconnect statement looks like:

    exec sql disconnect [connection target];

It closes the connection to the specified database. The connection target can be specified in the following ways:

    connection name
    default
    current
    all

Open cursor statement

An open cursor statement looks like:

    exec sql open cursor;

and is ignored and not copied from the output.

Commit statement

A commit statement looks like

    exec sql commit;

and is translated on the output to

    ECPGcommit(__LINE__);

Rollback statement

A rollback statement looks like

    exec sql rollback;

and is translated on the output to

    ECPGrollback(__LINE__);

Other statements

Other SQL statements are other statements that start with exec sql and end with ;. Everything in between is treated as an SQL statement and pa…
…nn conn,
                           void (*noticeProcessor)(void *arg, const char *message),
                           void *arg)

By default, libpq prints notice messages from the backend on stderr, as well as a few error messages that it generates by itself. This behavior can be overridden by supplying a callback function that does something else with the messages. The callback function is passed the text of the error message (which includes a trailing newline), plus a void pointer that is the same one passed to PQsetNoticeProcessor. (This pointer can be used to access application-specific state if needed.)

The default notice processor is simply

    static void
    defaultNoticeProcessor(void *arg, const char *message)
    {
        fprintf(stderr, "%s", message);
    }

To use a special notice processor, call PQsetNoticeProcessor just after creation of a new PGconn object.

User Authentication Functions

The frontend/backend authentication process is handled by PQconnectdb without any further intervention. The authentication method is now determined entirely by the DBA (see pg_hba.conf(5)). The following routines no longer have any effect and should not be used:

fe_getauthname — Returns a pointer to static space containing whatever name the user has authenticated. Use of this routine in place of calls to getenv(3) or getpwuid(3) by applications is highly recommended, as it is entirely possible that the authenticated user name is not the same as the value of the USER environment variable or the user's entry in /etc/p…
    Installation .......................................... 169
    For the Developer ..................................... 169
    The Preprocessor ...................................... 170
    A Complete Example .................................... 173
    The Library ........................................... 173
    Background ............................................ 175
    Windows Applications .................................. 175
    Writing Applications .................................. 175
    Unix Installation ..................................... 176
    Building the Driver ................................... 176
    Configuration Files ................................... 179
    Common Problems ....................................... 181
    Debugging ApplixWare ODBC Connections ................. 181
    Running the ApplixWare Demo ........................... 182
    Useful Macros ......................................... 183
    Supported Platforms ................................... 183
    Building the JDBC Interface ........................... 184
    Compiling the Driver .................................. 184
    Installing the Driver ................................. 184
    Preparing the Database for JDBC ....................... 184
    Using the Driver ...................................... 185
    Importing JDBC ........................................ 185
    Loading the Driver .................................... …
…nswer common questions and to allow a user to find those answers on his own without resorting to mailing list support.

Documentation Roadmap

Postgres has four primary documentation formats:

    Plain text, for pre-installation information
    HTML, for on-line browsing and reference
    Hardcopy, for in-depth reading and reference
    man pages, for quick reference

Table DG2-1. Postgres Documentation Products

    File          Description
    ./COPYRIGHT   Copyright notice
    …             Installation instructions (text from sgml -> rtf -> text)

There are man pages available for installation, as well as a large number of plain text README-type files throughout the Postgres source tree.

The Documentation Project

Packaged documentation is available in both HTML and Postscript formats. These are available as part of the standard Postgres installation. We discuss here working with the documentation sources and generating documentation packages.

The documentation sources are written using SGML markup of plain text files. The purpose of DocBook SGML is to allow an author to specify the structure and content of a technical document using the DocBook DTD, and to have a document style define how that content is rendered into a final form (e.g. using Norm Walsh's Modular Style Sheets). See Introduction to DocBook (http://nis-www.lanl.gov/~rosalia/mydocs/docbook-intro.html) for a nice "quickstart" summary of DocBook features.

DocBook Elements…
…nt information should appear in the Programmer's Guide. Currently included in the Programmer's Guide.

Reference Manual — Detailed reference information on command syntax. Currently included in the User's Guide.

In addition to this manual set, there are other resources to help you with Postgres installation and use:

man pages — The man pages have general information on command syntax.

FAQs — The Frequently Asked Questions (FAQ) documents address both general issues and some platform-specific issues.

READMEs — README files are available for some contributed packages.

Web Site — The Postgres (postgresql.org) web site has some information not appearing in the distribution. There is a mhonarc catalog of mailing list traffic, which is a rich resource for many topics.

Mailing Lists — The Postgres Questions (mailto:questions@postgresql.org) mailing list is a good place to have user questions answered. Other mailing lists are available; consult the web page for details.

Yourself! — Postgres is an open source product. As such, it depends on the user community for ongoing support. As you begin to use Postgres, you will rely on others for help, either through the documentation or through the mailing lists. Consider contributing your knowledge back. If you learn something which is not in the documentation, write it up and contribute it. If you add features to the code, contribute it. Even those without a lot of ex…
…nt to 1 tells libpq to byte-swap the value if necessary, so that it is delivered as a proper int value for the client machine. When result_is_int is 0, the byte string sent by the backend is returned unmodified. args and nargs specify the arguments to be passed to the function.

    typedef struct {
        int len;
        int isint;
        union {
            int *ptr;
            int integer;
        } u;
    } PQArgBlock;

PQfn always returns a valid PGresult. The resultStatus should be checked before the result is used. The caller is responsible for freeing the PGresult with PQclear when it is no longer needed.

Asynchronous Notification

Postgres supports asynchronous notification via the LISTEN and NOTIFY commands. A backend registers its interest in a particular notification condition with the LISTEN command (and can stop listening with the UNLISTEN command). All backends listening on a particular condition will be notified asynchronously when a NOTIFY of that condition name is executed by any backend. No additional information is passed from the notifier to the listener. Thus, typically, any actual data that needs to be communicated is transferred through a database relation. Commonly the condition name is the same as the associated relation, but it is not necessary for there to be any associated relation.

libpq applications submit LISTEN and UNLISTEN commands as ordinary SQL queries. Subsequently, arrival of NOTIFY messages can be detected by calling PQnotifies.

PQnotifies — Returns the next noti…
…nux/libc5, ix86/linux-glibc, Sparc/Solaris, and Sparc/SunOS.

1. Retrieve the binary tar file for cvsup (cvsupd is not required to be a client) appropriate for your platform.

   a. If you are running FreeBSD, install the CVSup port.

   b. If you have another platform, check for and download the appropriate binary from the Postgres ftp site (ftp://postgresql.org/pub).

2. Check the tar file to verify the contents and directory structure, if any. For the linux tar file at least, the static binary and man page is included without any directory packaging.

   a. If the binary is in the top level of the tar file, then simply unpack the tar file into your target directory:

        $ cd /usr/local/bin
        $ tar zxvf /usr/local/src/cvsup-16.0-linux-i386.tar.gz
        $ mv cvsup.1 ../doc/man/man1

   b. If there is a directory structure in the tar file, then unpack the tar file within /usr/local/src and move the binaries into the appropriate location as above.

3. Ensure that the new binaries are in your path:

        $ rehash
        $ which cvsup
        $ set path=(path to cvsup $path)
        $ which cvsup
        /usr/local/bin/cvsup

Installation from Sources

Installing CVSup from sources is not entirely trivial, primarily because most systems will need to install a Modula-3 compiler first. This compiler is available as Linux RPM, FreeBSD package, or source code.

Note: A clean-source installation of Modula-3 takes roughly 200MB of disk space, which s…
…o the Postgres server, due to an efficient streaming transfer protocol which only sends the changes since the last update.

Preparing a CVSup Client System

Two directory areas are required for CVSup to do its job: a local CVS repository (or simply a directory area if you are fetching a snapshot rather than a repository; see below) and a local CVSup bookkeeping area. These can coexist in the same directory tree.

Decide where you want to keep your local copy of the CVS repository. On one of our systems we recently set up a repository in /home/cvs/, but had formerly kept it under a Postgres development tree in /opt/postgres/cvs/. If you intend to keep your repository in /home/cvs/, then put

    setenv CVSROOT /home/cvs

in your .cshrc file, or a similar line in your .bashrc or .profile file, depending on your shell.

The cvs repository area must be initialized. Once CVSROOT is set, then this can be done with a single command:

    $ cvs init

after which you should see at least a directory named CVSROOT when listing the CVSROOT directory:

    $ ls $CVSROOT
    CVSROOT

Running a CVSup Client

Verify that cvsup is in your path; on most systems you can do this by typing

    $ which cvsup

Then, simply run cvsup using:

    $ cvsup -L 2 postgres.cvsup

where -L 2 enables some status messages so you can monitor the progress of the update, and postgres.cvsup is the path and name you have given to your CVSup configuratio…
…o_write writes at most len bytes to a large object from a variable buf.

Usage

buf must be the actual string to write, not a variable name.

pg_lo_lseek

Name

pg_lo_lseek — seek to a position in a large object

Synopsis

    pg_lo_lseek conn fd offset whence

Inputs

conn — Specifies a valid database connection.

fd — File descriptor for the large object from pg_lo_open.

offset — Specifies a zero-based offset in bytes.

whence — whence can be SEEK_CUR, SEEK_END, or SEEK_SET.

Outputs

None

Description

pg_lo_lseek positions to offset bytes from the beginning of the large object.

Usage

whence can be SEEK_CUR, SEEK_END, or SEEK_SET.

pg_lo_tell

Name

pg_lo_tell — return the current seek position of a large object

Synopsis

    pg_lo_tell conn fd

Inputs

conn — Specifies a valid database connection.

fd — File descriptor for the large object from pg_lo_open.

Outputs

offset — A zero-based offset in bytes suitable for input to pg_lo_lseek.

Description

pg_lo_tell returns the current seek offset in bytes from the beginning of the large object.

Usage

pg_lo_unlink

Name

pg_lo_unlink — delete a large object

Synopsis

    pg_lo_unlink conn lobjId

Inputs

conn — Specifies a valid database connection.

lobjId — Identifier for a large object. XXX Is this the same as objOid in other calls?? — thomas 1998-01-11

Outputs

None

Descr…
…ocation of shared (global) database files; data/base/ — location of local database files.

The page format may change in the future to provide more efficient access to large objects. This section contains insufficient detail to be of any assistance in writing a new access method.

Appendix DG1. The CVS Repository

The Postgres source code is stored and managed using the CVS code management system. At least two methods, anonymous CVS and CVSup, are available to pull the CVS code tree from the Postgres server to your local machine.

CVS Tree Organization

Author: Written by Marc G. Fournier (mailto:scrappy@hub.org) on 1998-11-05.

The command cvs checkout has a flag, -r, that lets you check out a certain revision of a module. This flag makes it easy to, for example, retrieve the sources that make up release 1.0 of the module tc at any time in the future:

    $ cvs checkout -r REL6_4 tc

This is useful, for instance, if someone claims that there is a bug in that release, but you cannot find the bug in the current working copy.

Tip: You can also check out a module as it was at any given date using the -D option.

When you tag more than one file with the same tag, you can think about the tag as a curve drawn through a matrix of filename vs. revision number. Say we have 5 files with the following revisions:

        file1  file2  file3  file4  file5
        1.1    1.1    1.1    1.1    1.1    <-- TAG
        1.2    1.2    1.2    1.2
        1.3    1.3    1.3    1.3
        1.4           1.4
…
…of the capabilities a function writer has in the C language, except for some restrictions.

The good restriction is that everything is executed in a safe Tcl interpreter. In addition to the limited command set of safe Tcl, only a few commands are available to access the database over SPI and to raise messages via elog(). There is no way to access internals of the database backend or to gain OS-level access under the permissions of the Postgres user ID, as in C. Thus, any unprivileged database user may be permitted to use this language.

The other, internally given, restriction is that Tcl procedures cannot be used to create input/output functions for new data types.

The shared object for the PL/Tcl call handler is automatically built and installed in the Postgres library directory if the Tcl/Tk support is specified in the configuration step of the installation procedure.

Description

Postgres Functions and Tcl Procedure Names

In Postgres, one and the same function name can be used for different functions as long as the number of arguments or their types differ. This would collide with Tcl procedure names. To offer the same flexibility in PL/Tcl, the internal Tcl procedure names contain the object ID of the procedure's pg_proc row as part of their name. Thus, different argtype versions of the same Postgres function are different for Tcl too.

Defining Functions in PL/Tcl

To create a function in the PL/Tcl language, use the known syntax…
…og shoelace_log. In step 2 the rule qualification is added to it, so the result set is restricted to rows where sl_avail changes:

    INSERT INTO shoelace_log SELECT
           *NEW*.sl_name, *NEW*.sl_avail,
           getpgusername(), datetime('now'::text)
      FROM shoelace_data shoelace_data, shoelace_data *NEW*,
           shoelace_data *OLD*, shoelace_log shoelace_log
     WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail);

In step 3 the original parsetree's qualification is added, restricting the result set further to only the rows touched by the original parsetree:

    INSERT INTO shoelace_log SELECT
           *NEW*.sl_name, *NEW*.sl_avail,
           getpgusername(), datetime('now'::text)
      FROM shoelace_data shoelace_data, shoelace_data *NEW*,
           shoelace_data *OLD*, shoelace_log shoelace_log
     WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail)
       AND bpchareq(shoelace_data.sl_name, 'sl7');

Step 4 substitutes NEW references by the targetlist entries from the original parsetree or with the matching variable references from the result relation:

    INSERT INTO shoelace_log SELECT
           shoelace_data.sl_name, 6,
           getpgusername(), datetime('now'::text)
      FROM shoelace_data shoelace_data, shoelace_data *NEW*,
           shoelace_data *OLD*, shoelace_log shoelace_log
     WHERE int4ne(6, *OLD*.sl_avail)
       AND bpchareq(shoelace_data.sl_name, 'sl7');

Step 5 replaces OLD references by result relation references:

    INSERT INTO shoelace_log SELECT
           shoelace_data.sl_name, 6,
           getpgusername(), datetime('now'::text)
      FROM shoelace_data shoelace_data, shoelace_data *NEW*,
           shoelace_data *OLD*…
…om the others in that they modify the parsetree in place instead of creating a new one. So SELECT rules are described first.

Currently, there can be only one action, and it must be a SELECT action that is INSTEAD. This restriction was required to make rules safe enough to open them for ordinary users, and it restricts rules ON SELECT to real view rules.

The examples for this document are two join views that do some calculations, and some more views using them in turn. One of the two first views is customized later by adding rules for INSERT, UPDATE and DELETE operations, so that the final result will be a view that behaves like a real table with some magic functionality. It is not such a simple example to start from, and this makes things harder to get into. But it's better to have one example that covers all the points discussed step by step rather than having many different ones that might mix up in mind.

The database needed to play with the examples is named al_bundy. You'll see soon why this is the database name. And it needs the procedural language PL/pgSQL installed, because we need a little min function returning the lower of 2 integer values. We create that as

    CREATE FUNCTION min(integer, integer) RETURNS integer AS '
        BEGIN
            IF $1 < $2 THEN
                RETURN $1;
            END IF;
            RETURN $2;
        END;
    ' LANGUAGE 'plpgsql';

The real tables we need in the first two rule system descriptions are these:

    CREATE T…
…ond the scope of this paper. There are many books and documents dealing with lex and yacc. You should be familiar with yacc before you start to study the grammar given in gram.y, otherwise you won't understand what happens there.

For a better understanding of the data structures used in Postgres for the processing of a query, we use an example to illustrate the changes made to these data structures in every stage.

Example 22-1. A Simple Select

This example contains the following simple query that will be used in various descriptions and figures throughout the following sections. The query assumes that the tables given in The Supplier Database have already been defined.

    select s.sname, se.pno
      from supplier s, sells se
     where s.sno > 2
       and s.sno = se.sno;

Figure \ref{parsetree} shows the parse tree built by the grammar rules and actions given in gram.y for the query given in A Simple Select, without the operator tree for the where clause, which is shown in figure \ref{where_clause}, because there was not enough space to show both data structures in one figure.

The top node of the tree is a SelectStmt node. For every entry appearing…
…onds with an ErrorResponse.

AuthenticationUnencryptedPassword — The frontend must then send an UnencryptedPasswordPacket. If this is the correct password, the postmaster responds with an AuthenticationOk, otherwise it responds with an ErrorResponse.

AuthenticationEncryptedPassword — The frontend must then send an EncryptedPasswordPacket. If this is the correct password, the postmaster responds with an AuthenticationOk, otherwise it responds with an ErrorResponse.

If the frontend does not support the authentication method requested by the postmaster, then it should immediately close the connection.

After sending AuthenticationOk, the postmaster attempts to launch a backend process. Since this might fail, or the backend might encounter a failure during startup, the frontend must wait for the backend to acknowledge successful startup. The frontend should send no messages at this point. The possible messages from the backend during this phase are:

BackendKeyData — This message is issued after successful backend startup. It provides secret-key data that the frontend must save if it wants to be able to issue cancel requests later. The frontend should not respond to this message, but should continue listening for a ReadyForQuery message.

ReadyForQuery — Backend startup is successful. The frontend may now issue query or function call messages.

ErrorResponse — Backend startup failed. The conne…
…orporate user-written code into itself through dynamic loading. That is, the user can specify an object code file (e.g., a compiled .o file or shared library) that implements a new type or function, and Postgres will load it as required. Code written in SQL is even more trivial to add to the server. This ability to modify its operation "on the fly" makes Postgres uniquely suited for rapid prototyping of new applications and storage structures.

The Postgres Type System

The Postgres type system can be broken down in several ways. Types are divided into base types and composite types. Base types are those, like int4, that are implemented in a language such as C. They generally correspond to what are often known as "abstract data types"; Postgres can only operate on such types through methods provided by the user, and only understands the behavior of such types to the extent that the user describes them. Composite types are created whenever the user creates a class. EMP is an example of a composite type. Postgres stores these types in only one way (within the file that stores all instances of the class), but the user can "look inside" at the attributes of these types from the query language and optimize their retrieval by (for example) defining indices on the attributes. Postgres base types are further divided into built-in types and user-defined types. Built-in types (like int4) are those that are compiled into the system. User-defined types are those create…
…param and enable foo_trace by writing values into the data/pg_options file:

    # file pg_options
    foo=1
    fooparam=17

The new options will be read by all new backends when they are started. To make the changes effective for all running backends, we need to send a SIGHUP to the postmaster. The signal will be automatically sent to all the backends. We can also activate the changes only for a specific backend by sending the SIGHUP directly to it.

pg_options can also be specified with the -T switch of Postgres:

    postgres options -T "verbose=2,query,hostlookup-"

The functions used for printing errors and debug messages can now make use of the syslog(2) facility. Messages printed to stdout or stderr are prefixed by a timestamp containing also the backend pid:

    #timestamp          #pid    #message
    980127.17:52:14.173 [29271] StartTransactionCommand
    980127.17:52:14.174 [29271] ProcessUtility: drop table t;
    980127.17:52:14.186 [29271] SIIncNumEntries: table is 70% full
    980127.17:52:14.186 [29286] Async_NotifyHandler
    980127.17:52:14.186 [29286] Waking up sleeping backend process
    980127.19:52:14.292 [29286] Async_NotifyFrontEnd
    980127.19:52:14.413 [29286] Async_NotifyFrontEnd done
    980127.19:52:14.466 [29286] Async_NotifyHandler done

This format improves readability of the logs and allows people to understand exactly which backend is doing what and at which time. It also makes it easier to write simple awk or perl scripts which monitor the log to detect database errors or…
…part and the application part is run in the same process. In later versions of Oracle this is no longer supported. This would require a total redesign of the Postgres access model, and that effort cannot be justified by the performance gained.

Porting From Other RDBMS Packages

The design of ecpg follows the SQL standard, so porting from a standard RDBMS should not be a problem. Unfortunately, there is no such thing as a standard RDBMS, so ecpg also tries to understand syntax additions as long as they do not create conflicts with the standard.

The following list shows all the known incompatibilities. If you find one not listed, please notify Michael Meskes (mailto:meskes@postgresql.org). Note, however, that we list only incompatibilities from a precompiler of another RDBMS to ecpg, and not additional ecpg features that these RDBMS do not have.

Syntax of FETCH command — The standard syntax of the FETCH command is:

    FETCH [direction] [amount] IN|FROM cursor name

ORACLE, however, does not use the keywords IN resp. FROM. This feature cannot be added, since it would create parsing conflicts.

Installation

Since version 0.5, ecpg is distributed together with Postgres. So you should get your precompiler, libraries and header files compiled and installed by default as a part of your installation.

For the Developer

This section is for those who want to develop the ecpg interface. It describes how the things work…
…perience can provide corrections and minor changes in the documentation, and that is a good way to start. The Postgres Documentation (mailto:docs@postgresql.org) mailing list is the place to get going.

Terminology

In the following documentation, "site" may be interpreted as the host machine on which Postgres is installed. Since it is possible to install more than one set of Postgres databases on a single host, this term more precisely denotes any particular set of installed Postgres binaries and databases.

The Postgres superuser is the user named postgres who owns the Postgres binaries and database files. As the database superuser, all protection mechanisms may be bypassed and any data accessed arbitrarily. In addition, the Postgres superuser is allowed to execute some support programs which are generally not available to all users. Note that the Postgres superuser is not the same as the Unix superuser (which will be referred to as root). The superuser should have a non-zero user identifier (UID) for security reasons.

The database administrator, or DBA, is the person who is responsible for installing Postgres with mechanisms to enforce a security policy for a site. The DBA can add new users by the method described below and maintain a set of template databases for use by createdb.

The postmaster is the process that acts as a clearing house for requests to the Postgres system. Frontend applications connect to the postmaster…
…pgport, *pgoptions, *pgtty;
        char       *dbName;
        int         nFields;
        int         i, j;
        FILE       *debug;
        PGconn     *conn;
        PGresult   *res;

        /*
         * begin, by setting the parameters for a backend connection if the
         * parameters are null, then the system will try to use reasonable
         * defaults by looking up environment variables or, failing that,
         * using hardwired constants
         */
        pghost = NULL;      /* host name of the backend server */
        pgport = NULL;      /* port of the backend server */
        pgoptions = NULL;   /* special options to start up the backend
                             * server */
        pgtty = NULL;       /* debugging tty for the backend server */
        dbName = "template1";

        /* make a connection to the database */
        conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);

        /*
         * check to see that the backend connection was successfully made
         */
        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
            fprintf(stderr, "%s", PQerrorMessage(conn));
            exit_nicely(conn);
        }

        debug = fopen("/tmp/trace.out", "w");
        PQtrace(conn, debug);

        /* start a transaction block */
        res = PQexec(conn, "BEGIN");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "BEGIN command failed\n");
            PQclear(res);
            exit_nicely(conn);
        }

        /*
         * should PQclear PGresult whenever it is no longer needed to avoid
         * memory leaks
         */
        PQclear(res);

        /*
         * fetch instances from the pg_database, the system catalog of
         * databases
         */
…points to storage that is part of the PGresult structure. One should not modify it, and one must explicitly copy the value into other storage if it is to be used past the lifetime of the PGresult structure itself.

BinaryTuples — is not yet implemented.

GetLength — Returns the length of a field (attribute) in bytes. Tuple and field indices start at 0.

    int PgDatabase::GetLength(int tup_num, int field_num)

This is the actual data length for the particular data value, that is, the size of the object pointed to by GetValue. Note that for ASCII-represented values, this size has little to do with the binary size reported by PQfsize.

GetLength — Returns the length of a field (attribute) in bytes. Tuple and field indices start at 0.

    int PgDatabase::GetLength(int tup_num, const char *field_name)

This is the actual data length for the particular data value, that is, the size of the object pointed to by GetValue. Note that for ASCII-represented values, this size has little to do with the binary size reported by PQfsize.

DisplayTuples — Prints out all the tuples and, optionally, the attribute names to the specified output stream.

    void PgDatabase::DisplayTuples(FILE *out = 0, int fillAlign = 1,
                                   const char *fieldSep = "|",
                                   int printHeader = 1, int quiet = 0)

PrintTuples — Prints out all the tuples and, optionally, the attribute names to the specified output stream.

    void PgDatabase::PrintTuples(FILE *out = 0, int printAttNa…
…postmaster.

LimString64 — Unused.

LimString64 — The optional tty the backend should use for debugging messages.

Terminate (F)

Byte1('X') — Identifies the message as a termination.

UnencryptedPasswordPacket (F)

Int32 — The size of the packet in bytes.

String — The unencrypted password.

Chapter 26. Postgres Signals

Note: Contributed by Massimo Dal Zotto (mailto:dz@cs.unitn.it)

Postgres uses the following signals for communication between the postmaster and backends:

Table 26-1. Postgres Signals

    Signal     postmaster Action            Server Action
    SIGHUP     kill(*, sighup)
    SIGQUIT    kill(*, sigterm)
    SIGTERM    kill(*, sigterm), kill(*, 9)  die
    SIGPIPE    ignored
    SIGUSR2    kill(*, sigusr2)             async notify (SI flush)
    SIGCHLD    reaper                       ignored (alive test)
    SIGTTIN    ignored
    SIGCONT    dumpstatus
    SIGFPE                                  FloatExceptionHandler

Note: kill(*, signal) means sending a signal to all backends.

The main changes to the old signal handling are the use of SIGQUIT instead of SIGHUP to handle warns, SIGHUP to re-read the pg_options file, and the redirection to all active backends of SIGHUP, SIGTERM, SIGUSR1 and SIGUSR2 sent to the postmaster. In this way these signals sent to the postmaster can be sent automatically to all the backends without the need to know their pids. To shut down postgres one needs only to send a SIGTERM to the postmaster and it will stop automatically all the backends. The SIGUSR2 signal is also used to prevent SI cache table overflow, which happens when some backen…
problem, or to compute transaction time statistics. Messages printed to syslog use the log facility LOG_LOCAL0. The use of syslog can be controlled with the syslog pg_option. Unfortunately many functions call directly printf() to print their messages to stdout or stderr, and this output can't be redirected to syslog or have timestamps in it. It would be advisable that all calls to printf would be replaced with the PRINTF macro and output to stderr be changed to use EPRINTF instead, so that we can control all output in a uniform way.

198 Chapter 23. pg_options

The new pg_options mechanism is more convenient than defining new backend option switches because:

- we don't have to define a different switch for each thing we want to control. All options are defined as keywords in an external file stored in the data directory.

- we don't have to restart Postgres to change the setting of some option. Normally backend options are specified to the postmaster and passed to each backend when it is started. Now they are read from a file.

- we can change options on the fly while a backend is running. We can thus investigate some problem by activating debug messages only when the problem appears. We can also try different values for tunable parameters.

The format of the pg_options file is as follows:

    # comment
    option=integer_value  # set value for option
    option                # set option = 1
    option+               # set option = 1
    option-               # set option = 0

Note that keyword can a
psqlodbc.so
    Database   = DatabaseName
    Servername = localhost
    Port       = 5432

Tip: Remember that the Postgres database name is usually a single word, without path names of any sort. The Postgres server manages the actual access to the database, and you need only specify the name from the client.

Other entries may be inserted to control the format of the display. The third required section is [ODBC], which must contain the InstallDir keyword and which may contain other options.

Here is an example odbc.ini file, showing access information for three databases:

    [ODBC Data Sources]
    DataEntry = Read/Write Database
    QueryOnly = Read-only Database
    Test      = Debugging Database
    Default   = Postgres Stripped

    [DataEntry]
    ReadOnly   = 0
    Servername = localhost
    Database   = Sales

    [QueryOnly]
    ReadOnly   = 1
    Servername = localhost
    Database   = Sales

    [Test]
    Debug      = 1
    CommLog    = 1
    ReadOnly   = 0
    Servername = localhost
    Username   = tgl
    Password   = no-way
    Port       = 5432
    Database   = test

    [Default]
    Servername = localhost
    Database   = tgl
    Driver     = /opt/postgres/current/lib/libpsqlodbc.so

    [ODBC]
    InstallDir = /opt/applix/axdata/axshlib

179 Chapter 20. ODBC Interface

ApplixWare Configuration

ApplixWare must be configured correctly in order for it to be able to access the Postgres ODBC software drivers.

Enabling ApplixWare Database Access

These instructions are for the 4.4.1 release of ApplixWare on Linux. Refer to the Linux Sys Admin on-line book
Optimizer and Executor. SPI also does some memory management.

To avoid misunderstanding we'll use function to mean SPI interface functions and procedure for user-defined C-functions using SPI.

SPI procedures are always called by some (upper) Executor, and the SPI manager uses the Executor to run your queries. Other procedures may be called by the Executor running queries from your procedure.

Note that if during execution of a query from a procedure the transaction is aborted, then control will not be returned to your procedure. Rather, all work will be rolled back and the server will wait for the next command from the client. This will be changed in future versions.

Other restrictions are the inability to execute BEGIN, END and ABORT (transaction control statements) and cursor operations. This will also be changed in the future.

If successful, SPI functions return a non-negative result (either via a returned integer value or in the SPI_result global variable, as described below). On error, a negative or NULL result will be returned.

85 Chapter 14. Server Programming Interface

Interface Functions

SPI_connect

Name
    SPI_connect - Connects your procedure to the SPI manager

Synopsis
    int SPI_connect(void)

Inputs
    None

Outputs
    int
        Return status: SPI_OK_CONNECT if connected, SPI_ERROR_CONNECT if not connected

Description
    SPI_connect opens a connection to the Postgres backend. You should call this function if you will need
empty result was returned.

NoticeResponse (B)
    Byte1('N')
        Identifies the message as a notice.
    String
        The notice message itself.

NotificationResponse (B)
    Byte1('A')
        Identifies the message as a notification response.
    Int32
        The process ID of the notifying backend process.
    String
        The name of the condition that the notify has been raised on.

Query (F)

214 Chapter 25. Frontend/Backend Protocol

    Byte1('Q')
        Identifies the message as a query.
    String
        The query string itself.

ReadyForQuery (B)
    Byte1('Z')
        Identifies the message type. ReadyForQuery is sent whenever the backend is ready for a new query cycle.

RowDescription (B)
    Byte1('T')
        Identifies the message as a row description.
    Int16
        Specifies the number of fields in a row (may be zero).
    Then, for each field, there is the following:
    String
        Specifies the field name.
    Int32
        Specifies the object ID of the field type.
    Int16
        Specifies the type size.
    Int32
        Specifies the type modifier.

StartupPacket (F)
    Int32(296)
        The size of the packet in bytes.
    Int32
        The protocol version number. The most significant 16 bits are the major version number. The least significant 16 bits are the minor version number.
    LimString64
        The database name, defaults to the user name if empty.
    LimString32
        The user name.
    LimString64
        Any additional command line arguments to be passed to the backend by the
r Norm Walsh's Modular Style Sheets; if other stylesheets are used, then one can define HDSL and PDSL as the full path and file name for the stylesheet, as is done above for HSTYLE and PSTYLE. On many systems these stylesheets will be found in packages installed in /usr/lib/sgml/, /usr/share/lib/sgml/, or /usr/local/lib/sgml/.

HTML documentation packages can be generated from the SGML source by typing:

    % cd doc/src
    % make tutorial.tar.gz
    % make user.tar.gz
    % make admin.tar.gz
    % make programmer.tar.gz
    % make postgres.tar.gz
    % make install

These packages can be installed from the main documentation directory by typing:

    % cd doc
    % make install

Hardcopy Generation for v6.5

The hardcopy Postscript documentation is generated by converting the SGML source code to RTF, then importing into ApplixWare 4.4.1. After a little cleanup (see the following section) the output is "printed" to a postscript file.

234 Appendix DG2. Documentation

RTF Cleanup Procedure

Several items must be addressed in generating Postscript hardcopy:

Applixware RTF Cleanup

Applixware does not seem to do a complete job of importing RTF generated by jade/MSS. In particular, all text is given the "Header1" style attribute label, although the text formatting itself is acceptable. Also, the Table of Contents page numbers do not refer to the section listed in the table, but rather refer to the page of the ToC itself.

1. Generate the RTF input by typing
r example, if your datatype is a structure in which there may be uninteresting pad bits, it's unsafe to mark the equality operator HASHES. (Unless, perhaps, you write your other operators to ensure that the unused bits are always zero.) Another example is that the FLOAT datatypes are unsafe for hash joins. On machines that meet the IEEE floating point standard, minus zero and plus zero are different values (different bit patterns) but they are defined to compare equal. So, if float equality were marked HASHES, a minus zero and a plus zero would probably not be matched up by a hash join, but they would be matched up by any other join process.

The bottom line is that you should probably only use HASHES for equality operators that are (or could be) implemented by memcmp().

SORT1 and SORT2

The SORT clauses, if present, tell the system that it is permissible to use the merge join method for a join based on the current operator. Both must be specified if either is. The current operator must be equality for some pair of data types, and the SORT1 and SORT2 clauses name the ordering operator ('<' operator) for the left and right side data types respectively.

Merge join is based on the idea of sorting the left and righthand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the same place in the sort order.
r example, that the '<' and '>' operators partition a B-tree. Postgres uses strategies to express these relationships between operators and the way they can be used to scan indices.

Defining a new set of strategies is beyond the scope of this discussion, but we'll explain how B-tree strategies work because you'll need to know that to add a new operator class. In the pg_am class, the amstrategies attribute is the number of strategies defined for this access method. For B-trees, this number is 5. These strategies correspond to

Table 9-2. B-tree Strategies

    Operation               Index
    less than               1
    less than or equal      2
    equal                   3
    greater than or equal   4
    greater than            5

The idea is that you'll need to add procedures corresponding to the comparisons above to the pg_amop relation (see below). The access method code can use these strategy numbers, regardless of data type, to figure out how to partition the B-tree, compute selectivity, and so on. Don't worry about the details of adding procedures yet; just understand that there must be a set of these procedures for int2, int4, oid, and every other data type on which a B-tree can operate.

Sometimes, strategies aren't enough information for the system to figure out how to use an index. Some access methods require other support routines in order to work. For example, the B-tree access method must be able to compare two keys and determine whether one is greater than, equal to, or less than the other. Similarly, the R-tree access method must be able to compute intersections, unions, and sizes of rectangles
24. Genetic Query Optimization in Database Systems ........................... 200
    Query Handling as a Complex Optimization Problem ........................ 200
    Genetic Algorithms (GA) ................................................. 200
    Genetic Query Optimization (GEQO) in Postgres ........................... 201
    Future Implementation Tasks for Postgres GEQO ........................... 202
        Basic Improvements .................................................. 202
            Improve freeing of memory when query is already processed ....... 202
            Improve genetic algorithm parameter settings .................... 202
            Find better solution for integer overflow ....................... 202
            Find solution for exhausted memory .............................. 202
    References .............................................................. 202
25. Frontend/Backend Protocol ................................................ 203
    Overview ................................................................ 203
    Protocol ................................................................ 203
        Startup ............................................................. 204
        Query ............................................................... 205
        Function Call ....................................................... 206
        Notification Responses .............................................. 207
        Cancelling Requests in Progress ..................................... 207
        Termination .........................................................
re the database resides, which is generally invisible to the frontend application. Obviously, it makes no sense to make the path relative to the directory in which the user started the frontend application, since the server could be running on a completely different machine. The Postgres user must be able to traverse the path given to the create function command and be able to read the object file. This is because the Postgres server runs as the Postgres user, not as the user who starts up the frontend process. (Making the file or a higher-level directory unreadable and/or unexecutable by the postgres user is an extremely common mistake.)

76 Chapter 12. Linking Dynamically-Loaded Functions

Symbol names defined within object files must not conflict with each other or with symbols defined in Postgres.

The GNU C compiler usually does not provide the special options that are required to use the operating system's dynamic loader interface. In such cases, the C compiler that comes with the operating system must be used.

ULTRIX

It is very easy to build dynamically-loaded object files under ULTRIX. ULTRIX does not have any shared library mechanism, and hence does not place any restrictions on the dynamic loader interface. On the other hand, we had to (re)write a non-portable dynamic loader ourselves and could not use true shared libraries. Under ULTRIX, the only restriction is that you must produce each object file with the option -G 0. (No
relation

Outputs
    char *
        The name of the specified relation

Description
    SPI_getrelname returns the name of the specified relation.

Usage
    TBD

Algorithm
    Copies the relation name into new storage.

103 Chapter 14. Server Programming Interface

SPI_palloc

Name
    SPI_palloc - Allocates memory in upper Executor context

Synopsis
    SPI_palloc(size)

Inputs
    Size size
        Octet size of storage to allocate

Outputs
    void *
        New storage space of specified size

Description
    SPI_palloc allocates memory in upper Executor context. See section on memory management.

Usage
    TBD

SPI_repalloc

Name
    SPI_repalloc - Re-allocates memory in upper Executor context

Synopsis
    SPI_repalloc(pointer, size)

Inputs
    void *pointer
        Pointer to existing storage
    Size size
        Octet size of storage to allocate

Outputs
    void *
        New storage space of specified size, with contents copied from existing area

Description
    SPI_repalloc re-allocates memory in upper Executor context. See section on memory management.

Usage
    TBD

SPI_pfree

Name
    SPI_pfree - Frees memory from upper Executor context

Synopsis
    SPI_pfree(pointer)

Inputs
    void *pointer
        Pointer to existing storage

Outputs
    None

Description
    SPI_pfree frees memory in upper Executor context. See section on memory management.

Usage
    TBD

Memory Management

Server allocates
res. libpq is a set of library routines that allow client programs to pass queries to the Postgres backend server and to receive the results of these queries. libpq is also the underlying engine for several other Postgres application interfaces, including libpq++ (C++), libpgtcl (Tcl), perl5, and ecpg. So some aspects of libpq's behavior will be important to you if you use one of those packages.

Three short programs are included at the end of this section to show how to write programs that use libpq. There are several complete examples of libpq applications in the following directories:

    src/test/regress
    src/test/examples
    src/bin/psql

Frontend programs which use libpq must include the header file libpq-fe.h and must link with the libpq library.

Database Connection Functions

The following routines deal with making a connection to a Postgres backend server. The application program can have several backend connections open at one time. (One reason to do that is to access more than one database.) Each connection is represented by a PGconn object, which is obtained from PQconnectdb or PQsetdbLogin. NOTE that these functions will always return a non-null object pointer, unless perhaps there is too little memory even to allocate the PGconn object. The PQstatus function should be called to check whether a connection was successfully made before queries are sent via the connection object.

PQsetdbLogin
    Makes a new connection to a backend. P
    x
    -
    1        <<< no tuples in a (0) + 1
    (1 row)

vac=> insert into a values (execq('select * from a', 0) + 1);
NOTICE: EXECQ: 0
INSERT 167713 1
vac=> select * from a;
    x
    -
    1
    2        <<< there was single tuple in a + 1
    (2 rows)

This demonstrates the data changes visibility rule:

vac=> insert into a select execq('select * from a', 0) * x from a;
NOTICE: EXECQ:
NOTICE: EXECQ:
NOTICE: EXECQ:
NOTICE: EXECQ:
NOTICE: EXECQ:
INSERT 0 2
vac=> select * from a;
    x
    -
    1
    2
    2        <<< 2 tuples * 1 (x in first tuple)
    6        <<< 3 tuples (2 + 1 just inserted) * 2 (x in second tuple)
             tuples visible to execq() in different invocations

109 Chapter 15. Large Objects

In Postgres, data values are stored in tuples, and individual tuples cannot span data pages. Since the size of a data page is 8192 bytes, the upper limit on the size of a data value is relatively low. To support the storage of larger atomic values, Postgres provides a large object interface. This interface provides file-oriented access to user data that has been declared to be a large type. This section describes the implementation and the programmatic and query language interfaces to Postgres large object data.

Historical Note

Originally, Postgres 4.2 supported three standard implementations of large objects: as files external to Postgres, as UNIX files managed by Postgres, and as data stored within the Postgres database. It
parsed for variable substitution. Variable substitution occurs when a symbol starts with a colon (:). Then a variable with that name is looked for among the variables that were previously declared within a declare section and, depending on the variable being for input or output, the pointers to the variables are written to the output to allow for access by the function.

172 Chapter 19. ecpg - Embedded SQL in C

For every variable that is part of the SQL request, the function gets another ten arguments:

- The type as a special symbol.
- A pointer to the value or a pointer to the pointer.
- The size of the variable if it is a char or varchar.
- Number of elements in the array (for array fetches).
- The offset to the next element in the array (for array fetches).
- The type of the indicator variable as a special symbol.
- A pointer to the value of the indicator variable or a pointer to the pointer of the indicator variable.
- 0.
- Number of elements in the indicator array (for array fetches).
- The offset to the next element in the indicator array (for array fetches).

A Complete Example

Here is a complete example describing the output of the preprocessor of a file foo.pgc:

    exec sql begin declare section;
    int index;
    int result;
    exec sql end declare section;
    ...
    exec sql select res into :result from mytable where index = :index;

is translated into:

    /* Processed by ecpg (2.6.0) */
    /* These two include files are added by the preprocessor */
    #include <ecpgt
query optimization seems to be a mere fraction of the time Postgres needs for freeing memory via routine MemoryContextFree, file backend/utils/mmgr/mcxt.c. Debugging showed that it gets stuck in a loop of routine OrderedElemPop, file backend/utils/mmgr/oset.c. The same problems arise with long queries when using the normal Postgres query optimization algorithm.

Improve genetic algorithm parameter settings

In file backend/optimizer/geqo/geqo_params.c, routines gimme_pool_size and gimme_number_generations, we have to find a compromise for the parameter settings to satisfy two competing demands:

- Optimality of the query plan
- Computing time

Find better solution for integer overflow

In file backend/optimizer/geqo/geqo_eval.c, routine geqo_joinrel_size, the present hack for MAXINT overflow is to set the Postgres integer value of rel->size to its logarithm. Modifications of Rel in backend/nodes/relation.h will surely have severe impacts on the whole Postgres implementation.

Find solution for exhausted memory

Memory exhaustion may occur with more than 10 relations involved in a query. In file backend/optimizer/geqo/geqo_eval.c, routine gimme_tree is recursively called. Maybe I forgot something to be freed correctly, but I dunno what. Of course the rel data structure of the join keeps growing and growing the more relations are packed into it. Suggestions are welcome.

References

Reference information for GEQ algorithms:

The Hitch-Hiker's
s a block comment that extends to the next occurrence of "*/". Block comments cannot be nested, but double dash comments can be enclosed into a block comment, and a double dash can hide the block comment delimiters "/*" and "*/".

Declarations

All variables, rows and records used in a block or its subblocks must be declared in the declarations section of a block, except for the loop variable of a FOR loop iterating over a range of integer values. Parameters given to a PL/pgSQL function are automatically declared with the usual identifiers $n. The declarations have the following syntax:

    name [ CONSTANT ] type [ NOT NULL ] [ DEFAULT value ];

        Declares a variable of the specified base type. If the variable is declared as CONSTANT, the value cannot be changed. If NOT NULL is specified, an assignment of a NULL value results in a runtime error. Since the default value of all variables is the SQL NULL value, all variables declared as NOT NULL must also have a default value specified.

        The default value is evaluated every time the function is called. So assigning 'now' to a variable of type datetime causes the variable to have the time of the actual function call, not when the function was precompiled into its bytecode.

    name class%ROWTYPE;

        Declares a row with the structure of the given class. Class must be an existing table- or viewname of the database. The fields of the row are accessed in the dot notation. Parameters to a function can
s of composite types from C. As Postgres processes a set of instances, each instance will be passed into your function as an opaque structure of type TUPLE. Suppose we want to write a function to answer the query:

15 Chapter 4. Extending SQL: Functions

    SELECT name, c_overpaid(EMP, 1500) AS overpaid
    FROM EMP
    WHERE name = 'Bill' or name = 'Sam';

In the query above, we can define c_overpaid as:

    #include "postgres.h"
    #include "executor/executor.h"  /* for GetAttributeByName() */

    bool
    c_overpaid(TupleTableSlot *t, /* the current instance of EMP */
               int4 limit)
    {
        bool isnull = false;
        int4 salary;

        salary = (int4) GetAttributeByName(t, "salary", &isnull);
        if (isnull)
            return (false);
        return salary > limit;
    }

GetAttributeByName is the Postgres system function that returns attributes out of the current instance. It has three arguments: the argument of type TUPLE passed into the function, the name of the desired attribute, and a return parameter that describes whether the attribute is null. GetAttributeByName will align data properly, so you can cast its return value to the desired type. For example, if you have an attribute name which is of the type name, the GetAttributeByName call would look like:

    char *str;
    ...
    str = (char *) GetAttributeByName(t, "name", &isnull);

The following query lets Postgres know about the c_overpaid function:

    CREATE FUNCTION c_overpaid(EMP, int4) RETURNS bool
        AS 'PGROOT/tutorial/obj/funcs.so' LANGUAGE
should be referenced by another name inside a trigger procedure.

Data Types

The type of a variable can be any of the existing basetypes of the database. type in the declarations section above is defined as:

    Postgres basetype
    variable%TYPE
    class.field%TYPE

variable
    The name of a variable, previously declared in the same function, that is visible at this point.

class
    The name of an existing table or view, where field is the name of an attribute.

Using the class.field%TYPE causes PL/pgSQL to look up the attributes definitions at the first call to the function during the lifetime of a backend. Have a table with a char(20) attribute and some PL/pgSQL functions that deal with its content in local variables. Now someone decides that char(20) isn't enough, dumps the table, drops it, recreates it now with the attribute in question defined as char(40), and restores the data. Ha, he forgot about the functions. The computations inside them will truncate the values to 20 characters. But if they are defined using the class.field%TYPE declarations, they will automagically handle the size change, or if the new table schema defines the attribute as text type.

64 Chapter 11. Procedural Languages

Expressions

All expressions used in PL/pgSQL statements are processed using the backend's executor. Expressions which appear to contain constants may in fact require run-time evaluation (e.g. 'now' for the datetime type), so it is impossible for the
s.sl_avail,
           min(sh.sh_avail, s.sl_avail) AS total_avail
      FROM shoe_ready shoe_ready, shoe_ready *OLD*, shoe_ready *NEW*,
           shoe rsh, shoelace rsl,
           shoe *OLD*, shoe *NEW*, shoe_data sh, unit un,
           shoelace *OLD*, shoelace *NEW*, shoelace_data s, unit u
     WHERE int4ge(min(sh.sh_avail, s.sl_avail), 2)
       AND bpchareq(s.sl_color, sh.slcolor)
       AND float8ge(float8mul(s.sl_len, u.un_fact),
                    float8mul(sh.slminlen, un.un_fact))
       AND float8le(float8mul(s.sl_len, u.un_fact),
                    float8mul(sh.slmaxlen, un.un_fact))
       AND bpchareq(sh.slunit, un.un_name)
       AND bpchareq(s.sl_unit, u.un_name);

Again we reduce it to a real SQL statement that is equivalent to the final output of the rule system:

    SELECT sh.shoename, sh.sh_avail, s.sl_name, s.sl_avail,
           min(sh.sh_avail, s.sl_avail) AS total_avail
      FROM shoe_data sh, shoelace_data s, unit u, unit un
     WHERE min(sh.sh_avail, s.sl_avail) >= 2
       AND s.sl_color = sh.slcolor
       AND s.sl_len * u.un_fact >= sh.slminlen * un.un_fact
       AND s.sl_len * u.un_fact <= sh.slmaxlen * un.un_fact
       AND sh.slunit = un.un_name
       AND s.sl_unit = u.un_name;

Recursive processing of rules rewrote one SELECT from a view into a parsetree that is equivalent to exactly that what Al had to type if there would be no views at all.

Note: There is currently no recursion stopping mechanism for view rules in the rule system (only for the other rules). This doesn't hurt much, because the only way to push this into an endless loop (blowing up the backend
expressions between the SELECT and the FROM keywords (* is just an abbreviation for all the attribute names of a relation).

    DELETE queries don't need a targetlist because they don't produce any result. In fact the optimizer will add a special entry to the empty targetlist. But this is after the rule system and will be discussed later. For the rule system the targetlist is empty.

    In INSERT queries the targetlist describes the new rows that should go into the resultrelation. Missing columns of the resultrelation will be added by the optimizer with a constant NULL expression. It is the expressions in the VALUES clause or the ones from the SELECT clause on INSERT ... SELECT.

    On UPDATE queries, it describes the new rows that should replace the old ones. Here now the optimizer will add missing columns by inserting expressions that put the values from the old rows into the new one. And it will add the special entry like for DELETE too. It is the expressions from the SET attribute = expression part of the query.

    Every entry in the targetlist contains an expression that can be a constant value, a variable pointing to an attribute of one of the relations in the rangetable, a parameter, or an expression tree made of function calls, constants, variables, operators etc.

the qualification

    The query's qualification is an expression much like one of those contained in the targetlist entries. The result value of this expression is a boolean that tells if the operation
just inserted is visible to queries in an AFTER trigger, and to queries in BEFORE/AFTER triggers fired after this.

81 Chapter 13. Triggers

Examples

There are more complex examples in src/test/regress/regress.c and in contrib/spi.

Here is a very simple example of trigger usage. Function trigf reports the number of tuples in the triggered relation ttest and skips the operation if the query attempts to insert NULL into x (i.e. it acts as a NOT NULL constraint but doesn't abort the transaction).

    #include "executor/spi.h"       /* this is what you need to work with SPI */
    #include "commands/trigger.h"   /* and triggers */

    HeapTuple trigf(void);

    HeapTuple
    trigf()
    {
        TupleDesc   tupdesc;
        HeapTuple   rettuple;
        char       *when;
        bool        checknull = false;
        bool        isnull;
        int         ret, i;

        if (!CurrentTriggerData)
            elog(WARN, "trigf: triggers are not initialized");

        /* tuple to return to Executor */
        if (TRIGGER_FIRED_BY_UPDATE(CurrentTriggerData->tg_event))
            rettuple = CurrentTriggerData->tg_newtuple;
        else
            rettuple = CurrentTriggerData->tg_trigtuple;

        /* check for NULLs ? */
        if (!TRIGGER_FIRED_BY_DELETE(CurrentTriggerData->tg_event) &&
            TRIGGER_FIRED_BEFORE(CurrentTriggerData->tg_event))
            checknull = true;

        if (TRIGGER_FIRED_BEFORE(CurrentTriggerData->tg_event))
            when = "before";
        else
            when = "after";

        tupdesc = CurrentTriggerData->tg_relation->rd_att;
        CurrentTriggerData = NULL;

        /* Connect to SPI manager */
        if (
results, and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. For example, '<' and '>=' are a negator pair for most datatypes. An operator can never validly be its own negator. Unlike COMMUTATOR, a pair of unary operators could validly be marked as each other's negators; that would mean (A x) equals NOT (B x) for all x, or the equivalent for right-unary operators.

An operator's negator must have the same left and/or right argument types as the operator itself, so just as with COMMUTATOR, only the operator name need be given in the NEGATOR clause.

Providing NEGATOR is very helpful to the query optimizer, since it allows expressions like NOT (x = y) to be simplified into x <> y. This comes up more often than you might think, because NOTs can be inserted as a consequence of other rearrangements.

Pairs of negator operators can be defined using the same methods explained above for commutator pairs.

RESTRICT

The RESTRICT clause, if provided, names a restriction selectivity estimation function for the operator. (Note that this is a function name, not an operator name.) RESTRICT clauses only make sense for binary operators that return boolean. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a WHERE-clause condition of the form

    field OP constant

for the current operator and a particular constant value. This assists
        "*"  Password field - hide value
        "D"  Debug option - don't create a field by default

    int dispsize
        Field size in characters for dialog

Returns the address of the connection options structure. This may be used to determine all possible PQconnectdb options and their current default values. The return value points to an array of PQconninfoOption structs, which ends with an entry having a NULL keyword pointer. Note that the default values ("val" fields) will depend on environment variables and other context. Callers must treat the connection options data as read-only.

118 Chapter 16. libpq

PQfinish
    Close the connection to the backend. Also frees memory used by the PGconn object.

        void PQfinish(PGconn *conn)

    Note that even if the backend connection attempt fails (as indicated by PQstatus), the application should call PQfinish to free the memory used by the PGconn object. The PGconn pointer should not be used after PQfinish has been called.

PQreset
    Reset the communication port with the backend.

        void PQreset(PGconn *conn)

    This function will close the connection to the backend and attempt to reestablish a new connection to the same postmaster, using all the same parameters previously used. This may be useful for error recovery if a working connection is lost.

libpq application programmers should be careful to maintain the PGconn abstraction. Use the accessor functions below to get at the contents of PGconn. Avoid directly referencing
Outputs
    fd
        A file descriptor for use in later pg_lo* routines

Description
    pg_lo_open opens an Inversion Large Object.

Usage
    Mode can be either "r", "w", or "rw".

157 Chapter 18. pgtcl

pg_lo_close

Name
    pg_lo_close - close a large object

Synopsis
    pg_lo_close conn fd

Inputs
    conn
        Specifies a valid database connection
    fd
        A file descriptor for use in later pg_lo* routines

Outputs
    None

Description
    pg_lo_close closes an Inversion Large Object.

Usage

pg_lo_read

Name
    pg_lo_read - read a large object

Synopsis
    pg_lo_read conn fd bufVar len

Inputs
    conn
        Specifies a valid database connection
    fd
        File descriptor for the large object from pg_lo_open
    bufVar
        Specifies a valid buffer variable to contain the large object segment
    len
        Specifies the maximum allowable size of the large object segment

Outputs
    None

Description
    pg_lo_read reads at most len bytes from a large object into a variable named bufVar.

Usage
    bufVar must be a valid variable name.

pg_lo_write

Name
    pg_lo_write - write a large object

Synopsis
    pg_lo_write conn fd buf len

Inputs
    conn
        Specifies a valid database connection
    fd
        File descriptor for the large object from pg_lo_open
    buf
        Specifies a valid string variable to write to the large object
    len
        Specifies the maximum size of the string to write

Outputs
    None

Description
    pg_l
oprright AND
           oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator SET oprrest = 'intltsel'::regproc, oprjoin = 'intltjoinsel'
     WHERE oprname = '<=' AND oprleft = oprright AND
           oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator SET oprrest = 'intltsel'::regproc, oprjoin = 'intltjoinsel'
     WHERE oprname = '=' AND oprleft = oprright AND
           oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator SET oprrest = 'intgtsel'::regproc, oprjoin = 'intgtjoinsel'
     WHERE oprname = '>=' AND oprleft = oprright AND
           oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

    UPDATE pg_operator SET oprrest = 'intgtsel'::regproc, oprjoin = 'intgtjoinsel'
     WHERE oprname = '>' AND oprleft = oprright AND
           oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');

And last, we register a description of this type:

    INSERT INTO pg_description (objoid, description)
       SELECT oid, 'Two part G/L account'
         FROM pg_type WHERE typname = 'complex_abs';

58 Chapter 10. GiST Indices

The information about GiST is at http://GiST.CS.Berkeley.EDU:8000/gist/ with more on different indexing and sorting schemes at http://s2k-ftp.CS.Berkeley.EDU:8000/personal/mh. And there is more interesting reading at the Berkeley database site at http://epoch.cs.berkeley.edu:8000/.

Author: This extraction from an e-mail sent by Eugene Selkov, Jr. (mailto:selkovjr@mcs.an
tallation instructions at the above-listed URL.

2. Unpack the distribution file, run configure, and then make and make install to put the byte-compiled files and info library in place.

3. Then add the following lines to your /usr/local/share/emacs/site-lisp/site-start.el file to make Emacs properly load PSGML when needed:

    (setq load-path
          (cons "/usr/local/share/emacs/site-lisp/psgml" load-path))
    (autoload 'sgml-mode "psgml" "Major mode to edit SGML files." t)

Appendix DG2. Documentation

4. If you want to use PSGML when editing HTML too, also add this:

    (setq auto-mode-alist
          (cons '("\\.s?html?\\'" . sgml-mode) auto-mode-alist))

5. There is one important thing to note with PSGML: its author assumed that your main SGML DTD directory would be /usr/local/lib/sgml. If, as in the examples in this chapter, you use /usr/local/share/sgml, you have to compensate for this:

    a. You can set the SGML_CATALOG_FILES environment variable.
    b. You can customize your PSGML installation (its manual tells you how).
    c. You can even edit the source file psgml.el before compiling and installing PSGML, changing the hard-coded paths to match your own default.

Installing JadeTeX

If you want to, you can also install JadeTeX to use TeX as a formatting backend for Jade. Note that this is still quite unpolished software, and will generate printed output that is inferior to what you get from the RTF backend. Still, it works all right, especially for simpler documents.
tg_nargs result in a NULL value. Second, they must return either NULL or a record/row containing exactly the structure of the table the trigger was fired for. Triggers fired AFTER might always return a NULL value with no effect. Triggers fired BEFORE signal the trigger manager to skip the operation for this actual row when returning NULL. Otherwise, the returned record/row replaces the inserted/updated row in the operation. It is possible to replace single values directly in NEW and return that, or to build a complete new record/row to return.

Exceptions

Postgres does not have a very smart exception handling model. Whenever the parser, planner/optimizer, or executor decides that a statement cannot be processed any longer, the whole transaction gets aborted and the system jumps back into the main loop to get the next query from the client application. It is possible to hook into the error mechanism to notice that this happens. But currently it is impossible to tell what really caused the abort (input/output conversion error, floating point error, parse error). And it is possible that the database backend is in an inconsistent state at this point, so returning to the upper executor or issuing more commands might corrupt the whole database. Moreover, at this point the information that the transaction is aborted has already been sent to the client application, so resuming operation does not make any sense. Thus, the only thing PL/pgSQL currently does when
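As an illustration of these return conventions, a minimal BEFORE trigger procedure might look like the following sketch (the table emp and its columns empname and salary are assumptions, not from this section; the syntax follows the old RETURNS opaque convention of this era):

```sql
CREATE FUNCTION emp_check () RETURNS opaque AS '
    BEGIN
        -- Returning NULL from a BEFORE trigger tells the trigger
        -- manager to silently skip the operation for this row.
        IF NEW.empname ISNULL THEN
            RETURN NULL;
        END IF;
        -- Replace a single value directly in NEW and return it;
        -- the returned row is what actually gets inserted/updated.
        IF NEW.salary ISNULL THEN
            NEW.salary := 0;
        END IF;
        RETURN NEW;
    END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER emp_check BEFORE INSERT OR UPDATE ON emp
    FOR EACH ROW EXECUTE PROCEDURE emp_check();
```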
that directory. You'll also want the ISO character set mappings, and probably one or more versions of HTML. One way to install the various DTD and support files and set up the catalog file is to collect them all into the above-mentioned directory, use a single file named CATALOG to describe them all, and then create the file catalog as a catalog pointer to the former, by giving it the single line of content:

    CATALOG /usr/local/share/sgml/CATALOG

The CATALOG file should then contain three types of lines. The first is the optional SGML declaration, thus:

    SGMLDECL "docbook.dcl"

Next, the various references to DTD and entity files must be resolved. For the DocBook files, these lines look like this:

    PUBLIC "-//Davenport//DTD DocBook V3.0//EN" "docbook.dtd"
    PUBLIC "-//USA-DOD//DTD Table Model 951010//EN" "cals-tbl.dtd"
    PUBLIC "-//Davenport//ELEMENTS DocBook Information Pool V3.0//EN" "dbpool.mod"
    PUBLIC "-//Davenport//ELEMENTS DocBook Document Hierarchy V3.0//EN" "dbhier.mod"
    PUBLIC "-//Davenport//ENTITIES DocBook Additional General Entities V3.0//EN" "dbgenent.mod"

Of course, a file containing these comes with the DocBook kit. Note that the last item on each of these lines is a file name, given here without a path. You can put the files in subdirectories of your main SGML directory if you like, of course, and modify the reference in the CATALOG file. DocBook also references the ISO character set entities, so
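For illustration, catalog entries resolving the ISO entity sets might look like the following. The public identifiers are the standard ISO 8879 ones; the file names on the right are assumptions and depend on how your kit names them:

```
PUBLIC "ISO 8879:1986//ENTITIES Added Latin 1//EN"                  "ISOlat1"
PUBLIC "ISO 8879:1986//ENTITIES Numeric and Special Graphic//EN"    "ISOnum"
```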
that have very high or very low selectivity, even if they aren't really equality or inequality. For example, the regular-expression matching operators (~, ~*, etc.) use eqsel on the assumption that they'll usually only match a small fraction of the entries in a table.

The JOIN clause, if provided, names a join selectivity estimation function for the operator (note that this is a function name, not an operator name). JOIN clauses only make sense for binary operators that return boolean. The idea behind a join selectivity estimator is to guess what fraction of the rows in a pair of tables will satisfy a WHERE-clause condition of the form

    table1.field1 OP table2.field2

for the current operator. As with the RESTRICT clause, this helps the optimizer very substantially by letting it figure out which of several possible join sequences is likely to take the least work. As before, this chapter will make no attempt to explain how to write a join selectivity estimator function, but will just suggest that you use one of the standard estimators if one is applicable: eqjoinsel for =, neqjoinsel for <>, intltjoinsel for < or <=, intgtjoinsel for > or >=.

HASHES

The HASHES clause, if present, tells the system that it is OK to use the hash join method for a join based on this operator. HASHES only makes sense for binary operators that return boolean, and in practice the operator had better be equality for some data type. The assumption underlying
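Putting the RESTRICT, JOIN, and HASHES clauses together, an equality operator definition for the chapter's hypothetical complex type might look like this sketch (the support function name complex_abs_eq is an assumption):

```sql
CREATE OPERATOR = (
    leftarg   = complex,
    rightarg  = complex,
    procedure = complex_abs_eq,
    commutator = =,
    restrict  = eqsel,       -- standard restriction estimator for equality
    join      = eqjoinsel,   -- standard join estimator for equality
    hashes                   -- equality, so hash join is permissible
);
```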
the same as described for spi_exec happens for the loop body and the variables for the fields selected. Here's an example for a PL/Tcl function using a prepared plan:

    CREATE FUNCTION t1_count(int4, int4) RETURNS int4 AS '
        if {![ info exists GD(plan) ]} {
            # prepare the saved plan on the first call
            set GD(plan) [ spi_prepare \\
                "SELECT count(*) AS cnt FROM t1 WHERE num >= \\$1 AND num <= \\$2" \\
                int4 ]
        }
        spi_execp -count 1 $GD(plan) [ list $1 $2 ]
        return $cnt
    ' LANGUAGE 'pltcl';

Note that each backslash that Tcl should see must be doubled in the query creating the function, since the main parser processes backslashes too on CREATE FUNCTION. Inside the query string given to spi_prepare there should really be dollar signs, to mark the parameter positions and to not let $1 be substituted by the value given in the first function call.

Modules and the unknown command

PL/Tcl has special support for things often used. It recognizes two magic tables, pltcl_modules and pltcl_modfuncs. If these exist, the module "unknown" is loaded into the interpreter right after creation. Whenever an unknown Tcl procedure is called, the unknown proc is asked to check if the procedure is defined in one of the modules. If this is true, the module is loaded on demand. To enable this behavior, the PL/Tcl call handler must be compiled with -DPLTCL_UNKNOWN_SUPPORT set. There are support scripts to maintain these tables in the modules subdirectory of the PL/Tcl source
this saved plan.

Usage
    If nulls is NULL then SPI_execp assumes that all values (if any) are NOT NULL.

    Note: If one of the objects (a relation, function, etc.) referenced by the prepared plan is dropped during your session (by your backend or another process), then the results of SPI_execp for this plan will be unpredictable.

Chapter 14. Server Programming Interface

Interface Support Functions

All functions described below may be used by connected and unconnected procedures.

SPI_copytuple

Name
    SPI_copytuple — makes a copy of a tuple in upper Executor context

Synopsis
    SPI_copytuple(tuple)

Inputs
    HeapTuple tuple — input tuple to be copied

Outputs
    HeapTuple — copied tuple; non-NULL if tuple is not NULL and the copy was successful, NULL only if tuple is NULL

Description
    SPI_copytuple makes a copy of tuple in upper Executor context. See the section on Memory Management.

Usage
    TBD

SPI_modifytuple

Name
    SPI_modifytuple — modifies a tuple of a relation

Synopsis
    SPI_modifytuple(rel, tuple, nattrs, attnum, Values, Nulls)

Inputs
    Relation rel
    HeapTuple tuple — input tuple to be modified
    int nattrs — number of attribute numbers in attnum
    int *attnum — array of numbers of the attributes which are to be changed
    Datum *Values — new values for the attributes specified
    char *Nulls — which attributes are NULL, if any

Outputs
    HeapTuple — new tuple with modifications
tice that that's the numeral 0 and not the letter O. For example:

    # simple ULTRIX example
    cc -G 0 -c foo.c

produces an object file called foo.o that can then be dynamically loaded into Postgres. No additional loading or link editing must be performed.

DEC OSF/1

Under DEC OSF/1, you can take any simple object file and produce a shared object file by running the ld command over it with the correct options. The commands to do this look like:

    # simple DEC OSF/1 example
    cc -c foo.c
    ld -shared -expect_unresolved '*' -o foo.so foo.o

The resulting shared object file can then be loaded into Postgres. When specifying the object file name to the create function command, one must give it the name of the shared object file (ending in .so) rather than the simple object file.

Tip: Actually, Postgres does not care what you name the file as long as it is a shared object file. If you prefer to name your shared object files with the extension .o, this is fine with Postgres, so long as you make sure that the correct file name is given to the create function command. In other words, you must simply be consistent. However, from a pragmatic point of view, we discourage this practice because you will undoubtedly confuse yourself with regard to which files have been made into shared object files and which have not. For example, it's very hard to write Makefiles to do the link-editing automatically if both the object file and the shared object file
getpgusername(), 'now'::text
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*,
       shoelace shoelace, shoelace *OLD*, shoelace *NEW*,
       shoelace_data showlace_data, shoelace *OLD*, shoelace *NEW*,
       shoelace_data s, unit u, shoelace_data *OLD*, shoelace_data *NEW*,
       shoelace_log shoelace_log
 WHERE bpchareq(s.sl_name, showlace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, s.sl_name)
   AND int4ne(int4pl(s.sl_avail, shoelace_arrive.arr_quant), s.sl_avail);

Chapter 8. The Postgres Rule System

After that the rule system runs out of rules and returns the generated parsetrees. So we end up with two final parsetrees that are equal to the SQL statements:

    INSERT INTO shoelace_log
        SELECT s.sl_name, s.sl_avail + shoelace_arrive.arr_quant,
               getpgusername(), 'now'::text
          FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data,
               shoelace_data s
         WHERE s.sl_name = shoelace_arrive.arr_name
           AND shoelace_data.sl_name = s.sl_name
           AND s.sl_avail + shoelace_arrive.arr_quant != s.sl_avail;

    UPDATE shoelace_data
       SET sl_avail = shoelace_data.sl_avail + shoelace_arrive.arr_quant
      FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data,
           shoelace_data s
     WHERE s.sl_name = shoelace_arrive.sl_name
       AND shoelace_data.sl_name = s.sl_name;

The result is that data coming from one relation, inserted into another, changed into updates on a third, changed into updating a fourth, plus logging that final update in
tion is established. (c) From that point on, the frontend process and the backend server communicate without intervention by the postmaster. Hence, the postmaster is always running, waiting for requests, whereas frontend and backend processes come and go.

The libpq library allows a single frontend to make multiple connections to backend processes. However, the frontend application is still a single-threaded process. Multithreaded frontend/backend connections are not currently supported in libpq. One implication of this architecture is that the postmaster and the backend always run on the same machine (the database server), while the frontend application may run anywhere. You should keep this in mind, because the files that can be accessed on a client machine may not be accessible (or may only be accessed using a different filename) on the database server machine.

You should also be aware that the postmaster and postgres servers run with the user-id of the Postgres superuser. Note that the Postgres superuser does not have to be a special user (e.g., a user named "postgres"), although many systems are installed that way. Furthermore, the Postgres superuser should definitely not be the UNIX superuser (root). In any case, all files relating to a database should belong to this Postgres superuser.

Figure 2-1. How a connection is established. (a) A frontend sends a request to the postmaster via a well-known network socket.
tion not included in this document), and also includes precompiled drivers for v6.4 and earlier.

Chapter 22. Overview of PostgreSQL Internals

Author: This chapter originally appeared as a part of Simkovics, 1998, Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O. Univ. Prof. Dr. Georg Gottlob and Univ. Ass. Mag. Katrin Seyr.

This chapter gives an overview of the internal structure of the backend of Postgres. After having read the following sections you should have an idea of how a query is processed. Don't expect a detailed description here (I think such a description dealing with all data structures and functions used within Postgres would exceed 1000 pages!). This chapter is intended to help understanding the general control and data flow within the backend from receiving a query to sending the results.

The Path of a Query

Here we give a short overview of the stages a query has to pass in order to obtain a result:

1. A connection from an application program to the Postgres server has to be established. The application program transmits a query to the server and receives the results sent back by the server.

2. The parser stage checks the query transmitted by the application program (client) for correct syntax and creates a query tree.

3. The rewrite system takes the query tree created by the parser stage and looks for any rules (stored in the system catalogs) to apply to the querytree
to any back-end database, regardless of the vendor, as long as the database schema is the same. For example, you could have MS SQL Server and Postgres servers which have exactly the same data. Using ODBC, your Windows application would make exactly the same calls, and the back-end data source would look the same (to the Windows app).

Insight Distributors (http://www.insightdist.com) provides active and ongoing support for the core psqlODBC distribution. They provide a FAQ (http://www.insightdist.com/psqlodbc), ongoing development on the code base, and actively participate on the interfaces mailing list (mailto:interfaces@postgresql.org).

Windows Applications

In the real world, differences in drivers and the level of ODBC support lessen the potential of ODBC. Access, Delphi, and Visual Basic all support ODBC directly. Under C++, such as Visual C++, you can use the C++ ODBC API. In Visual C++, you can use the CRecordSet class, which wraps the ODBC API set within an MFC 4.2 class. This is the easiest route if you are doing Windows C++ development under Windows NT.

Writing Applications

If I write an application for Postgres, can I write it using ODBC calls to the Postgres server, or is that only when another database program like MS SQL Server or Access needs to access the data?

Chapter 20. ODBC Interface

The ODBC API is the way to go. For Visual C++ coding, you can find out more at Microsoft's web site or in your VC++ docs.
to execute queries. Some utility SPI functions may be called from un-connected procedures.

You may get the SPI_ERROR_CONNECT error if SPI_connect is called from an already connected procedure — e.g., if you directly call one procedure from another connected one. Actually, while the child procedure will be able to use SPI, your parent procedure will not be able to continue to use SPI after the child returns (if SPI_finish is called by the child). It's bad practice.

Usage
    XXX thomas 1997-12-24

Algorithm
    SPI_connect performs the following: initializes the SPI internal structures for query execution and memory management.

SPI_finish

Name
    SPI_finish — disconnects your procedure from the SPI manager

Synopsis
    SPI_finish(void)

Inputs
    None

Outputs
    int — SPI_OK_FINISH if properly disconnected, SPI_ERROR_UNCONNECTED if called from an un-connected procedure

Description
    SPI_finish closes an existing connection to the Postgres backend. You should call this function after completing operations through the SPI manager. You may get the error return SPI_ERROR_UNCONNECTED if SPI_finish is called without having a current valid connection. There is no fundamental problem with this; it means that nothing was done by the SPI manager.

Usage
    SPI_finish must be called as a final step by a connected procedure, or you may get unpredictable results.
to time.

Anonymous CVS

1. You will need a local copy of CVS (Concurrent Version Control System), which you can get from http://www.cyclic.com/ or any GNU software archive site. We currently recommend version 1.10 (the most recent at the time of writing). Many systems have a recent version of cvs installed by default.

2. Do an initial login to the CVS server:

    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot login

   You will be prompted for a password; enter "postgresql". You should only need to do this once, since the password will be saved in .cvspass in your home directory.

3. Fetch the Postgres sources:

    cvs -z3 -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co -P pgsql

   which installs the Postgres sources into a subdirectory pgsql of the directory you are currently in.

Appendix DG1. The CVS Repository

   Note: If you have a fast link to the Internet, you may not need -z3, which instructs CVS to use gzip compression for transferred data. But on a modem-speed link, it's a very substantial win.

   This initial checkout is a little slower than simply downloading a tar.gz file; expect it to take 40 minutes or so if you have a 28.8K modem. The advantage of CVS doesn't show up until you want to update the file set later on.

4. Whenever you want to update to the latest CVS sources, cd into the pgsql subdirectory and issue:

    cvs -z3 update -d -P

   This will fetch only the changes since the last time you updated
tored in a normal SQL table. They are stored as a Table/Index pair, and are referred to from your own tables by an OID value.

Chapter 21. JDBC Interface

Now, there are two methods of using Large Objects. The first is the standard JDBC way, and is documented here. The other uses our own extension to the api, which presents the libpq large object API to Java, providing even better access to large objects than the standard. Internally, the driver uses the extension to provide large object support.

In JDBC, the standard way to access them is using the getBinaryStream() method in ResultSet, and the setBinaryStream() method in PreparedStatement. These methods make the large object appear as a Java stream, allowing you to use the java.io package, and others, to manipulate the object. For example, suppose you have a table containing the file name of an image, and a large object containing that image:

    create table images (imgname name, imgoid oid);

To insert an image, you would use:

    File file = new File("myimage.gif");
    FileInputStream fis = new FileInputStream(file);
    PreparedStatement ps = conn.prepareStatement("insert into images values (?,?)");
    ps.setString(1, file.getName());
    ps.setBinaryStream(2, fis, file.length());
    ps.executeUpdate();
    ps.close();
    fis.close();

Now in this example, setBinaryStream transfers a set number of bytes from a stream into a large object, and stores the OID into the field holding a reference to it. Retrie
tree list is empty. There can be zero (NOTHING keyword), one, or multiple actions. To simplify, we look at a rule with one action. This rule can have a qualification or not, and it can be INSTEAD or not.

What is a rule qualification? It is a restriction that tells when the actions of the rule should be done and when not. This qualification can only reference the NEW and/or OLD pseudo-relations, which are basically the relation given as object (but with a special meaning).

Chapter 8. The Postgres Rule System

So we have four cases that produce the following parsetrees for a one-action rule:

No qualification and not INSTEAD:
    The parsetree from the rule action, where the original parsetree's qualification has been added.

No qualification but INSTEAD:
    The parsetree from the rule action, where the original parsetree's qualification has been added.

Qualification given and not INSTEAD:
    The parsetree from the rule action, where the rule qualification and the original parsetree's qualification have been added.

Qualification given and INSTEAD:
    The parsetree from the rule action, where the rule qualification and the original parsetree's qualification have been added, and the original parsetree where the negated rule qualification has been added.

Finally, if the rule is not INSTEAD, the unchanged original parsetree is added to the list. Since only qualified INSTEAD rules already add the original parsetree, we end up with a total maximum of two parsetrees for a rule with one action.

The parsetrees generated from rule actions are thrown into the rewrite system again, and maybe more rules get applied, resulting in more or fewer parsetrees. So the parsetrees in the rule actions must have either another commandtype or another resultrelation; otherwise, this recursive process will end up in a loop. There is a compiled-in recursion limit of currently 10 iterations. If after 10 iterations there are still update rules to apply, the rule system assumes a loop over multiple rule definitions and aborts the transaction.

The parsetrees found in the actions of the pg_rewrite system catalog are only templates. Since they can reference the rangetable entries for NEW and OLD, some substitutions have to be made before they can be used. For any reference to NEW, the targetlist of the original query is searched for a corresponding entry. If found, that entry's expression is placed into the reference. Otherwise, NEW means the same as OLD. Any reference to OLD is replaced by a reference to the rangetable entry which is the resultrelation.

A First Rule Step by Step

We want to trace changes to the sl_avail column in the shoelace_data relation. So we setup a log table and a rule that writes us entries every time an UPDATE is performed on shoelace_data.

    CREATE TABLE shoelace_log (
        sl_name    char(10),      -- shoelace changed
        sl_avail   integer,       -- new available value
        log_who    name
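A sketch of the rule that belongs with such a log table, following the chapter's running example (the exact rule name and column list are assumptions; the WHERE clause ensures an entry is only written when sl_avail actually changes):

```sql
CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
    WHERE NEW.sl_avail != OLD.sl_avail
    DO INSERT INTO shoelace_log VALUES (
        NEW.sl_name,       -- the shoelace that changed
        NEW.sl_avail,      -- the new available value
        getpgusername(),   -- who did it
        'now'::text        -- when
    );
```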
ture. The new configuration and build files for the driver should make it a simple process to build the driver on the supported platforms. Currently these include Linux and FreeBSD, but we are hoping other users will contribute the necessary information to quickly expand the number of platforms for which the driver can be built.

There are actually two separate methods to build the driver, depending on how you received it, and these differences come down to only where and how to run configure and make. The driver can be built in a standalone, client-only installation, or can be built as a part of the main Postgres distribution. The standalone installation is convenient if you have ODBC client applications on multiple, heterogeneous platforms. The integrated installation is convenient when the target client is the same as the server, or when the client and server have similar runtime configurations.

Specifically, if you have received the psqlODBC driver as part of the Postgres distribution (from now on referred to as an "integrated" build), then you will configure and make the ODBC driver from the top-level source directory of the Postgres distribution, along with the rest of its libraries. If you received the driver as a standalone package, then you will run configure and make from the directory in which you unpacked the driver source.

Chapter 20. ODBC Interface

Integrated Installation

This installation procedure is appropriate for an integrated insta
two successors: one attached to the field lefttree and the second attached to the field righttree. Each of the subnodes represents one relation of the join. As mentioned above, a merge sort join requires each relation to be sorted. That's why we find a Sort node in each subplan. The additional qualification given in the query (s.sno > 2) is pushed down as far as possible and is attached to the qpqual field of the leaf SeqScan node of the corresponding subplan.

The list attached to the field mergeclauses of the MergeJoin node contains information about the join attributes. The values 65000 and 65001 for the varno fields in the VAR nodes appearing in the mergeclauses list (and also in the targetlist) mean that not the tuples of the current node should be considered, but the tuples of the next "deeper" nodes (i.e. the top nodes of the subplans) should be used instead.

Note that every Sort and SeqScan node appearing in the plan figure has got a targetlist, but because there was not enough space, only the one for the MergeJoin node could be drawn.

Another task performed by the planner/optimizer is fixing the operator ids in the Expr and Oper nodes. As mentioned earlier, Postgres supports a variety of different data types, and even user-defined types can be used. To be able to maintain the huge amount of functions and operators, it is necessary to store them in a system table. Each function and operator
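To make the plan shape concrete, a query of roughly the following form (the supplier/sells table and column names are assumed from the thesis's running example used elsewhere in this chapter) would produce a MergeJoin node over two sorted SeqScan subplans:

```sql
SELECT s.sname, se.pno
  FROM supplier s, sells se
 WHERE s.sno > 2        -- pushed down into the qpqual of the SeqScan on supplier
   AND s.sno = se.sno;  -- becomes the mergeclause of the MergeJoin node
```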
ule present that has to be applied to the query, it rewrites the tree to an alternate form.

Chapter 22. Overview of PostgreSQL Internals

Techniques To Implement Views

Now we will sketch the algorithm of the query rewrite system. For better illustration, we show how to implement views using rules as an example. Let the following rule be given:

    create rule view_rule
    as on select
    to test_view
    do instead
        select s.sname, p.pname
        from supplier s, sells se, part p
        where s.sno = se.sno and p.pno = se.pno;

The given rule will be fired whenever a select against the relation test_view is detected. Instead of selecting the tuples from test_view, the select statement given in the action part of the rule is executed.

Let the following user query against test_view be given:

    select sname
    from test_view
    where sname <> 'Smith';

Here is a list of the steps performed by the query rewrite system whenever a user query against test_view appears. (The following listing is a very informal description of the algorithm, just intended for basic understanding. For a detailed description refer to Stonebraker et al, 1989.)

test_view Rewrite

1. Take the query given in the action part of the rule.

2. Adapt the targetlist to meet the number and order of attributes given in the user query.

3. Add the qualification given in the where clause of the user query to the qualification of the query given in the action part of the rule.

Given the rule defin
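Applying these three steps to the example above yields, informally, the following rewritten query (a sketch only — the rewrite actually operates on query trees, not on SQL text):

```sql
select s.sname                    -- targetlist adapted to the user query
from supplier s, sells se, part p -- range tables from the rule action
where s.sno = se.sno
  and p.pno = se.pno
  and s.sname <> 'Smith';         -- user qualification added
```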
ule/trigger should delete rows from software that reference the deleted host. Since the trigger is called for each individual row deleted from computer, it can use the statement

    DELETE FROM software WHERE hostname = $1;

in a prepared and saved plan, and pass the hostname in the parameter. The rule would be written as

    CREATE RULE computer_del AS ON DELETE TO computer
        DO DELETE FROM software WHERE hostname = OLD.hostname;

Now we look at different types of deletes. In the case of a

    DELETE FROM computer WHERE hostname = 'mypc.local.net';

the table computer is scanned by index (fast), and the query issued by the trigger would also be an index scan (fast too). The extra query from the rule would be a

    DELETE FROM software WHERE computer.hostname = 'mypc.local.net'
                           AND software.hostname = computer.hostname;

Chapter 8. The Postgres Rule System

Since there are appropriate indices setup, the optimizer will create a plan of

    Nestloop
      -> Index Scan using comp_hostidx on computer
      -> Index Scan using soft_hostidx on software

So there would be not that much difference in speed between the trigger and the rule implementation. With the next delete we want to get rid of all the 2000 computers where the hostname starts with 'old'. There are two possible queries to do that. One is

    DELETE FROM computer WHERE hostname >= 'old'
                           AND hostname <  'ole'

where the plan for the rule query will be a

    Hash Join
      -> Seq Scan
urces:

1. I failed to make use of the original HPUX Makefile and rearranged the Makefile from the ancient postgres95 tutorial to do the job. I tried to keep it generic, but I am a very poor makefile writer — just did some monkey work. Sorry about that, but I guess it is now a little more portable than the original makefile.

2. I built the example sources right under pgsql/src (just extracted the tar file there). The aforementioned Makefile assumes it is one level below pgsql/src (in our case, in pgsql/src/pggist).

3. The changes I made to the *.c files were all about #include's, function prototypes, and typecasting. Other than that, I just threw away a bunch of unused vars and added a couple of parentheses to please gcc. I hope I did not screw up too much.

4. There is a comment in polyproc.sql:

    -- there's a memory leak in rtree poly_ops!!
    -- create index pix2 on polytmp using rtree (p poly_ops);

Chapter 10. GiST Indices

Roger that! I thought it could be related to a number of Postgres versions back and tried the query. My system went nuts and I had to shoot down the postmaster in about ten minutes. I will continue to look into GiST for a while, but I would also appreciate more examples of R-tree usage.

Chapter 11. Procedural Languages

Beginning with the release of version 6.3, Postgres supports the definition of procedural languages. In the case of a function or trigger procedure defined in a procedural language, th
urrently not supported. Well, it's easy to rewrite a simple SELECT into a union. But it is a little difficult if the view is part of a join doing an update.

ORDER BY clauses in view definitions aren't supported.

DISTINCT isn't supported in view definitions.

There is no good reason why the optimizer should not handle parsetree constructs that the parser could never produce due to limitations in the SQL syntax. The author hopes that these items disappear in the future.

Implementation Side Effects

Using the described rule system to implement views has a funny side effect. The following does not seem to work:

    al_bundy=> INSERT INTO shoe (shoename, sh_avail, slcolor)
    al_bundy->     VALUES ('sh5', 0, 'black');
    INSERT 20128 1
    al_bundy=> SELECT shoename, sh_avail, slcolor FROM shoe_data;
    shoename  |sh_avail|slcolor
    ----------+--------+----------
    sh1       |       2|black
    sh3       |       4|brown
    sh2       |       0|black
    sh4       |       3|brown
    (4 rows)

The interesting thing is that the return code for INSERT gave us an object ID and told that 1 row has been inserted. But it doesn't appear in shoe_data. Looking into the database directory, we can see that the database file for the view relation shoe now seems to have a data block. And that is definitely the case. We can also issue a DELETE, and if it does not have a qualification, it tells us that rows have been deleted, and the next vacuum run will reset the file to zero size. The reason for t
ust either project an attribute out of the instance or pass the entire instance into another function.

    SELECT name(new_emp()) AS nobody;

    nobody
    ------
    None

The reason why, in general, we must use the function syntax for projecting attributes of function return values is that the parser just doesn't understand the other (dot) syntax for projection when combined with function calls:

    SELECT new_emp().name AS nobody;
    WARN: parser: syntax error at or near "."

Any collection of commands in the SQL query language can be packaged together and defined as a function. The commands can include updates (i.e., insert, update and delete) as well as select queries. However, the final command must be a select that returns whatever is specified as the function's returntype.

    CREATE FUNCTION clean_EMP() RETURNS int4 AS '
        DELETE FROM EMP WHERE EMP.salary <= 0;
        SELECT 1 AS ignore_this'
    LANGUAGE 'sql';

    SELECT clean_EMP();

    x
    -
    1

Programming Language Functions

Programming Language Functions on Base Types

Internally, Postgres regards a base type as a "blob of memory". The user-defined functions that you define over a type in turn define the way that Postgres can operate on it. That is, Postgres will only store and retrieve the data from disk and use your user-defined functions to input, process, and output the data. Base types can have one of three internal formats: pass by value
The valid values for whence are SEEK_SET, SEEK_CUR and SEEK_END.

Closing a Large Object Descriptor

A large object may be closed by calling

    int lo_close(PGconn *conn, int fd);

where fd is a large object descriptor returned by lo_open. On success, lo_close returns zero. On error, the return value is negative.

Built-in Registered Functions

There are two built-in registered functions, lo_import and lo_export, which are convenient for use in SQL queries. Here is an example of their use:

    CREATE TABLE image (
        name    text,
        raster  oid
    );

    INSERT INTO image (name, raster)
        VALUES ('beautiful image', lo_import('/etc/motd'));

    SELECT lo_export(image.raster, '/tmp/motd') FROM image
        WHERE name = 'beautiful image';

Accessing Large Objects from LIBPQ

Below is a sample program which shows how the large object interface in LIBPQ can be used. Parts of the program are commented out but are left in the source for the reader's benefit. This program can be found in src/test/examples. Frontend applications which use the large object interface in LIBPQ should include the header file libpq/libpq-fs.h and link with the libpq library.

Chapter 15. Large Objects

Sample Program

    /*
     * testlo.c
     *     test using large objects with libpq
     *
     * Copyright (c) 1994, Regents of the University of California
     */
    #include <stdio.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"

    #define BUFSIZE 1024

    /*
     * importFile -
     *     import file
Retrieving an image is even easier. (I'm using PreparedStatement here, but Statement could equally be used.)

    PreparedStatement ps = con.prepareStatement("select oid from images where name=?");
    ps.setString(1, "myimage.gif");
    ResultSet rs = ps.executeQuery();
    if (rs != null) {
        while (rs.next()) {
            InputStream is = rs.getBinaryStream(1);
            // use the stream in some way here
            is.close();
        }
        rs.close();
    }
    ps.close();

Here you can see the Large Object being retrieved as an InputStream. You'll also notice that we close the stream before processing the next row in the result. This is part of the JDBC Specification, which states that any InputStream returned is closed when ResultSet.next() or ResultSet.close() is called.

Postgres Extensions to the JDBC API

Postgres is an extensible database system. You can add your own functions to the backend, which can then be called from queries, or even add your own data types.

Chapter 21. JDBC Interface

Now, as these are facilities unique to Postgres, we support them from Java with a set of extension APIs. Some features within the core of the standard driver actually use these extensions to implement Large Objects, etc.

Further Reading

If you have not yet read it, I'd advise you to read the JDBC API Documentation (supplied with Sun's JDK) and the JDBC Specification. Both are available on JavaSoft's web site (http://www.javasoft.com). My own web site (http://www.retep.org.uk) contains updated informa
How SELECT Rules Work
View Rules in Non-SELECT Statements
The Power of Views in Postgres
    Benefits
    Concerns
    Implementation Side Effects
Rules on INSERT, UPDATE and DELETE
    Differences to View Rules
    How These Rules Work
        A First Rule Step by Step
        Cooperation With Views
    Rules and Permissions
    Rules versus Triggers
9. Interfacing Extensions To Indices
11. Procedural Languages
    Installing Procedural Languages
    PL/pgSQL
        Overview
        Description
        Structure of PL/pgSQL
        Comments
        Declarations
        Data Types
        Expressions
        Statements
        Trigger Procedures
        log_who    name,      -- who did it
        log_when   datetime   -- when
    );

    CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
        WHERE NEW.sl_avail != OLD.sl_avail
        DO INSERT INTO shoelace_log VALUES (
            NEW.sl_name,
            NEW.sl_avail,
            getpgusername(),
            'now'::text
        );

One interesting detail is the casting of 'now' in the rule's INSERT action to type text. Without that, the parser would see at CREATE RULE time that the target type in shoelace_log is a datetime and would try to make a constant from it — with success. So a constant datetime value would be stored in the rule action, and all log entries would have the time of the CREATE RULE statement. Not exactly what we want. The casting causes the parser to construct a datetime('now'::text) instead, and this expression will be evaluated when the rule is executed.

Now Al does:

    al_bundy=> UPDATE shoelace_data SET sl_avail = 6
    al_bundy->     WHERE sl_name = 'sl7';

and we look at the log table:

    al_bundy=> SELECT * FROM shoelace_log;
    sl_name   |sl_avail|log_who|log_when
    ----------+--------+-------+--------------------------------
    sl7       |       6|Al     |Tue Oct 20 16:14:45 1998 MET DST
    (1 row)

That's what we expected. What happened in the background is the following. The parser created the parsetree (this time the parts of the original parsetree are highlighted, because the base of operations is the rule action for update rules):

    UPDATE shoelace_data SET sl_avail = 6
        FROM shoelace_data shoelace_data
        WHERE bpchareq(shoelace_data.sl_name, 'sl7');

There is a rule log_shoelace that is ON
with local ticket files. This environment variable is only used if Kerberos authentication is selected by the backend.

PGOPTIONS sets additional runtime options for the Postgres backend.

PGTTY sets the file or tty on which debugging messages from the backend server are displayed.

The following environment variables can be used to specify user-level default behavior for every Postgres session:

PGDATESTYLE sets the default style of date/time representation.

PGTZ sets the default time zone.

The following environment variables can be used to specify default internal behavior for every Postgres session:

PGGEQO sets the default mode for the genetic optimizer.

PGRPLANS sets the default mode to allow or disable right-sided plans in the optimizer.

PGCOSTHEAP sets the default cost for heap searches for the optimizer.

PGCOSTINDEX sets the default cost for indexed searches for the optimizer.

PGQUERY_LIMIT sets the maximum number of rows returned by a query.

Refer to the SET SQL command for information on correct values for these environment variables.

Caveats

The query buffer is 8192 bytes long, and queries over that length will be rejected.

Sample Programs

Sample Program 1

    /*
     * testlibpq.c
     *     Test the C version of Libpq, the Postgres frontend library.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    void
    exit_nicely(PGconn *conn)
    {
        PQfinish(conn);
        exit(1);
    }

    main()
    {
        char *pghost,
Configure the standalone installation:

    ./configure

The configuration can be done with options:

    ./configure --prefix=rootdir --with-odbc=inidir

where --prefix installs the libraries and headers in the directories rootdir/lib and rootdir/include/iodbc, and --with-odbc installs odbcinst.ini in the specified directory.

Note that both of these options can also be used from the integrated build, but be aware that when used in the integrated build, --prefix will also apply to the rest of your Postgres installation. --with-odbc applies only to the configuration file odbcinst.ini.

Compile and link the source code:

    make ODBCINST=instdir

You can also override the default location for installation on the make command line. This only applies to the installation of the library and header files. Since the driver needs to know the location of the odbcinst.ini file, attempting to override the environment variable that specifies its installation directory will probably cause you headaches. It is safest simply to allow the driver to install the odbcinst.ini file in the default directory, or the directory you specified on the configure command line with --with-odbc.

Install the source code:

    make POSTGRESDIR=targettree install

To override the library and header installation directories separately, you need to pass the correct installation variables on the make install command line. These variables are LIBDIR, HEADERDIR and ODBCINST. Overriding P
J. D. Ullman, Vol. 1, Computer Science Press, 1988.

PostgreSQL-Specific Documentation

The PostgreSQL Administrator's Guide, edited by Thomas Lockhart, 1999-06-01, The PostgreSQL Global Development Group.

The PostgreSQL Developer's Guide, edited by Thomas Lockhart, 1999-06-01, The PostgreSQL Global Development Group.

The PostgreSQL Programmer's Guide, edited by Thomas Lockhart, 1999-06-01, The PostgreSQL Global Development Group.

The PostgreSQL Tutorial Introduction, edited by Thomas Lockhart, 1999-06-01, The PostgreSQL Global Development Group.

The PostgreSQL User's Guide, edited by Thomas Lockhart, 1999-06-01, The PostgreSQL Global Development Group.

Enhancement of the ANSI SQL Implementation of PostgreSQL, Stefan Simkovics, O. Univ. Prof. Dr. Georg Gottlob, November 29, 1998, Department of Information Systems, Vienna University of Technology. Discusses SQL history and syntax, and describes the addition of INTERSECT and EXCEPT constructs into Postgres. Prepared as a Master's Thesis with the support of O. Univ. Prof. Dr. Georg Gottlob and Univ. Ass. Mag. Katrin Seyr at Vienna University of Technology.

Bibliography

The Postgres95 User Manual, A. Yu and J. Chen, The POSTGRES Group, Sept. 5, 1995, University of California, Berkeley CA.

Proceedings and Articles

"Partial indexing in POSTGRES: research project", Nels Olson, 1993, UCB Engin T7.49.1993 O676, University of California, Berkeley CA.

"A Unified Framework for
you need to fetch and install these (they are available from several sources, and are easily found by way of the URLs listed above), along with catalog entries for all of them, such as:

    PUBLIC "ISO 8879:1986//ENTITIES Added Latin 1//EN" "ISO/ISOlat1"

Note how the file name here contains a directory name, showing that we've placed the ISO entity files in a subdirectory named ISO. Again, proper catalog entries should accompany the entity kit you fetch.

Installing Norman Walsh's DSSSL Style Sheets

1. Read the installation instructions at the above listed URL.

2. To install Norman's style sheets, simply unzip the distribution kit in a suitable place. A good place to do this would be /usr/local/share, which places the kit in a directory tree under /usr/local/share/docbook. The command will be something like:

    unzip -aU db119.zip

One way to test the installation is to build the HTML and RTF forms of the PostgreSQL User's Guide.

a. To build the HTML files, go to the SGML source directory, doc/src/sgml, and say:

    jade -t sgml -d /usr/local/share/docbook/html/docbook.dsl -D ../graphics postgres.sgml

book1.htm is the top-level node of the output.

b. To generate the RTF output, ready for importing into your favorite word processing system and printing, type:

    jade -t rtf -d /usr/local/share/docbook/print/docbook.dsl -D ../graphics postgres.sgml

Installing PSGML

1. Read the installation instructions at the above listed URL.
    #include <ecpgtype.h>
    #include <ecpglib.h>

    exec sql begin declare section;
    #line 1 "foo.pgc"
    int index;
    int result;
    exec sql end declare section;
    ...
    ECPGdo(__LINE__, NULL, "select res from mytable where index = ?",
           ECPGt_int, &index, 1L, 1L, sizeof(int),
           ECPGt_NO_INDICATOR, NULL, 0L, 0L, 0L, ECPGt_EOIT,
           ECPGt_int, &result, 1L, 1L, sizeof(int),
           ECPGt_NO_INDICATOR, NULL, 0L, 0L, 0L, ECPGt_EORT);
    #line 147 "foo.pgc"

(The indentation in this manual is added for readability, and is not something that the preprocessor can do.)

The Library

The most important function in the library is the ECPGdo function. It takes a variable number of arguments. Hopefully we will not run into machines with limits on the number of arguments that can be accepted by a vararg function. This could easily add up to 50 or so arguments.

Chapter 19. ecpg - Embedded SQL in C

The arguments are:

A line number. This is a line number for the original line, used in error messages only.

A string. This is the SQL request that is to be issued. This request is modified by the input variables, i.e. the variables that were not known at compile time but are to be entered in the request. Where the variables should go, the string contains "?".

Input variables. As described in the section about the preprocessor, every input variable gets ten arguments.

ECPGt_EOIT. An enum telling that there are no more input variables.

Output variables
The rule system as implemented in Postgres ensures that this is all the information available about the query up to now.

Concerns

There was a long time when the Postgres rule system was considered broken. The use of rules was not recommended, and the only part that worked were view rules. And even these view rules caused problems, because the rule system wasn't able to apply them properly to statements other than a SELECT; for example, an UPDATE that used data from a view didn't work.

During that time, development moved on, and many features were added to the parser and optimizer. The rule system got more and more out of sync with their capabilities, and it became harder and harder to start fixing it. Thus, no one did.

For 6.4, someone locked the door, took a deep breath, and shuffled that damned thing up. What came out was a rule system with the capabilities described in this document. But there are still some constructs not handled, and some where it fails due to things that are currently not supported by the Postgres query optimizer.

Views with aggregate columns have bad problems. Aggregate expressions in qualifications must be used in subselects. Currently it is not possible to do a join of two views, each having an aggregate column, and compare the two aggregate values in the qualification. In the meantime, it is possible to put these aggregate expressions into functions with the appropriate arguments and use them in the view definition.

Views of unions are currently not supported.