Providing a balance of “old” and “new” knowledge in education content: examples from graphic technology teaching

 North-West Institute of Printing of the St. Petersburg State University of Technology and Design, Russia



Keywords: Graphic technology, Education, Knowledge, Specialization


Abstract. An approach to the “universal” training paradigm of providing an optimal balance between “old” and “new” knowledge within course credit limits, as well as other dialectic issues of education content, is discussed against the background of a retrospective analysis of graphic technology development, with an accent on the informative nature of a print product.


1. Introduction

For many years, printing technology was mostly considered a specialized material processing routine relying, in general, upon various sections of chemistry, physics and photography. While other information technologies appeared as products of modern “precise knowledge”, the printing industry has accumulated graphic data processing skills and experience over the centuries, i.e. long before the term IT appeared. The informational nature of a print has always motivated training in this technology to take into account the general appeal of printed matter to the cognitive and aesthetic perception of a customer. The coming IT era accentuates this nature of printing’s “raw material” and of its final product.

Nevertheless, the physicochemical and mechanical component cannot be even partially reduced in today’s graphic education content. Unlike in other ITs, the output data has to be materialized here as a hard copy on very different kinds of print stock and with the use of a steadily growing variety of equipment, consumables and processes. So, there is a need to combine the ever-growing physicochemical component of printing technology with computer and networking graphic data processing issues within a unified educational program.

The other challenge of graphic education consists in its having to embrace the whole spectrum of printing technology applications. The average headcount in the graphic industry doesn’t exceed 20, so there is no room in a company for another specialist with higher education. That, in turn, presents an obstacle to specialized training which would focus on just a certain printing method or print product, or only on pre-press, press or post-press.

Against the background of the growing variety of printing methods and facilities for their computerized control, a lot of dilettantish misconceptions have appeared as a result of superficial interpretations of the processes in some manuals, advertising pamphlets, etc. To help a trainee adequately appreciate, for example, the “color myths”, Giorgianni and Madden included in their book [1] a whole section analyzing such misconceptions in a myth/reality manner. A similar attempt in relation to the “halftoning myths” was made in [2].

It also appears very useful today to consider the multistage printing technology as a physical communication channel of limited bandwidth and a certain noise level, where the press, especially the “digital” one, acts as a specialized peripheral computer or network unit. In turn, the prepress, and especially its halftoning and color rendering stages, can be considered, in the light of communication theory, as processes of optimal graphic data encoding performed under criteria of conformity of these data with the properties of both the channel and the end user’s visual perception.

Several generations of pre-press technologies have replaced one another during the last decades, introducing novel imaging techniques and equipment. Each time this required, in its turn, a serious replacement of training content in relation to operating skills, image processing, color management, etc. It is important, instead of completely putting aside the knowledge related to the previous stage, to substitute for it a detailed analysis of the essentially negative features of the former technique which forced the move to its next generation. It is not out of place, either, to indicate some useful features inherent in the previous stage but not available at the beginning of the next one, or even later. Some examples of such analysis and accents are given below from our teaching experience.


2. Some dialectic issues of the pre-press technology development

In the Middle Ages, the prepress comprised making a page print forme with both letters and pictures engraved by a single person. Following the invention of typesetting, page making was for over three centuries divided into two parallel text and image processing production lines. Each of them used essentially different specific skills, tools and, in its latest period, rather sophisticated electronic equipment. About three decades ago, as schematically shown in Fig. 1, they once again converged into an entity due to the computer desktop facilities used, again, by a single operator.

It is important to formulate the reasons both for the initial separation of these two processes and for their recent convergence [3]. The explanation is based on the principal difference between those two kinds of input graphic data. Even if relatively conditional, an image comprises, for the most part, a replica of the visually perceived world. Text is, on the contrary, initially ciphered data, with its letters or hieroglyphs comprising the graphic codes of sounds or ideas. Such distinction of the processed matter has required, in turn, the different artistic skills of the engraver and the compositor. Nowadays, it has become possible to bring these distinct data to the unified form of an eight-bit combination understood by a Mac or PC. In a text file, one of the 256 values of a byte can indicate a target character of a font, while in an image file it points out a pixel tone value (halftone dot area, ink coverage).
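This unified presentation can be illustrated with a short sketch (the snippet below is purely illustrative and not tied to any particular system): the very same byte is read as a character code in a text file, but as a tone value in an image file.

```python
# The same 8-bit value means different things in different file types:
# a character code in a text file, a pixel tone value in an image file.
byte = 0x41                     # one of the 256 possible values of a byte

as_text = chr(byte)             # text interpretation: the letter 'A'
as_tone = byte / 255 * 100      # image interpretation: % ink coverage

print(as_text, round(as_tone, 1))
```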

Layout systems appeared shortly after such unified presentation became available for a volume of data sufficient for a whole page. First of all, they were required in gravure printing to replace the manual pasting-up of the original for illustrated magazine pages and, as well, to eliminate the cumbersome scan unit providing the control signals for cylinder engraving heads. The first color system of that kind was created in the late 70s under the name Helio Data Processing (HDP) [4].

Earlier, there was often a clear knowledge of what had to be done with image data but a lack of means to realize the task. Today we face quite the contrary situation of adequate resources but a lack of knowledge of what should be done [5]. Thirty years ago, there were a number of scientifically approved recommendations on, for example, the direction and degree of tone and color value correction for print quality improvement. However, even at the time of color electronic scanners, there was a lack of means of proper control for providing the desired variations. The digital image processing of today allows for practically unlimited variation of such parameters in any direction with the discretion of just 100 square microns of ink coverage. Nevertheless, some professionals currently point out the need for additional research and training which could substantiate the recommendations and working methods for effective use of such precise, recently appeared control facilities. In this relation, E. Enoksson [6] notes, for example, that in only about a quarter of Swedish print houses has anyone ever heard of the UCR/GCR functions of Photoshop.
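The UCR/GCR functions mentioned above rest on a simple idea which can be sketched as follows; this is a minimal illustration of gray component replacement in general, not of Photoshop’s actual algorithm, and the coverage values are hypothetical.

```python
def gcr(c, m, y, fraction=0.5):
    """Gray component replacement: move a part of the common (gray)
    component of the C, M, Y coverages into the black (K) channel.
    All values are ink coverages in the 0..1 range."""
    gray = min(c, m, y)        # the component printable with black alone
    k = fraction * gray        # the share replaced by black ink
    return c - k, m - k, y - k, k

# A saturated brown: heavy coverage in all three chromatic inks.
c2, m2, y2, k = gcr(0.9, 0.6, 0.5, fraction=1.0)
print([round(v, 2) for v in (c2, m2, y2, k)])  # [0.4, 0.1, 0.0, 0.5]
```

Full replacement (fraction=1.0) prints the entire gray component with black ink alone, saving the chromatic inks and stabilizing neutral tones against press variations.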


3. Moving forces and trade-offs within the step-by-step technology evolution

A certain kind of academic problem arises periodically with the progressive implementation and partial, or sometimes complete, mutual displacement of engraving, photochemistry, electronics, laser, computer and networking techniques in the graphic industry. In this respect, the following stages of data processing for printing can be distinguished after the times of manual engraving:

- photomechanical processes, starting from the 1880s;

- electronic reproduction:

     analogue signals, as of 1950;

     digital signals, as of 1970;

- computer processing systems:

     closed (CEPS), as of 1980;

     open (PostScript-based), as of 1990.

Each time over more than the last hundred years, these changes had a serious effect on graphic engineering teaching content. At every new technical period, the educator had to “trim” the “old” knowledge, providing the optimal conformity of the rest of it with the “new” one within the course credit limits. In this respect, it is important to make, each time, a retrospective analysis of the basic reasons for moving to a novel technology level, along with disclosing the trade-offs usually inherent in this move. So, it is not out of place to give some examples for the above-listed stages.


3.1 Photomechanical process

The most essential feature of the first stage was the introduction of the projection halftone screen. This invention greatly increased the illustrative component in print products, excluding the need for the artistic services of an engraver for the reproduction of photographs. However, some shortcomings inherent in this technique haven’t been overcome until now. One of them is that the print dots of any kind of halftone (periodic or “stochastic”) destroy the fine detail of a print image (Fig. 2). That continues to keep the definition of the latter about ten times lower than the printing process resolution. The locally adaptive halftoning techniques, which artificially copy the engraver’s skill of representing, for example, a fishing rod by a solid line instead of the scattered dots of a halftone, are still not in wide use [7].
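This loss of fine detail can be reproduced in a few lines of code; the sketch below uses an ordered Bayer threshold matrix as a stand-in for any periodic screen and is not taken from any referenced system.

```python
# Ordered (periodic) halftoning with a 4x4 Bayer threshold matrix:
# a one-pixel-wide mid-gray line, thinner than the screen period,
# comes out as scattered dots instead of a solid stroke.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(img):
    """Threshold each 0..1 tone sample against the tiled matrix."""
    return [[1 if img[y][x] > (BAYER4[y % 4][x % 4] + 0.5) / 16 else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]

img = [[0.0] * 8 for _ in range(5)]   # white field...
img[2] = [0.5] * 8                    # ...with one 50%-gray thin line
for row in halftone(img):
    print("".join("#" if v else "." for v in row))
```

A locally adaptive screener would instead detect the thin line and render it with an unbroken run of dots, much as an engraver would draw a solid stroke.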


3.2 Electronic reproduction

The basic reason for the transition from photomechanical reproduction to electronic was the intermediate presentation of image data by an electric signal. In spite of the relative complexity of the process, as compared to the previous one, it allowed for the flexible tone and color correction demanded to replace the material-consuming masking techniques and costly manual retouching. Nevertheless, as compared to the latter, even the latest color electronic scanners couldn’t provide such correction selectively, for a given image area, while the scanning procedure also broke the continuous input data into, at least, discrete lines.

Within the electronic reproduction period one can find intensive inventive work and patent competition aimed at providing this technology with the functionalities which, being inherent in the former camera equipment, were initially sacrificed on behalf of the color correction facilities. So, it seems useful to mention below some principal technical solutions concerning image scaling, electronic halftoning, screen angling and soft proofing.

For the scanners operating on analogue signals, it was very problematic to vary the image size on a drum in its circumferential direction. So, for over a decade, low-speed “swaying frame” scanning of only 4 lines per second was used to facilitate the sizing of an image in this direction. The problem could be solved only after it became possible to convert into digital form and to place in a buffer memory the image data of, at least, one scan line. In J. Crosfield’s memoirs [8] one can find the keen story of his Magnascan color scanner launched in the market by agreement with his “friend”, as well as competitor and patent holder, R. Hell [9]. Later on, an interesting technical solution bypassing this patent was used by Dainippon Screen in the Scanagraph 701 [10]. The latter became, perhaps, the first sample of this type of equipment having its drums for an original and its copy not fixed on the same shaft and thereby comprising, contrary to the former “color scanner” concept, separate input and output units, currently defined as “scanner” and “imagesetter”.

The halftone image was initially recorded in color scanners through a contact screen placed over “lith” film having super-high contrast due to so-called infectious development. The use of the photomechanical screening effect with such film provided a “hard” halftone dot. However, its area (tone value) was very dependent on film exposing and processing stability because of the bell-shaped exposure distribution in the latent dot image. So, the basic reason for introducing electronic screening was to provide a robust connection of the resulting dot area to the tone value signal. This fact is worthwhile to mention nowadays, because a similar problem is currently encountered in “laser” printing. The halftone dot exposure profiles on the OPC drums also have a bell, Gaussian shape, which results in the same kind of unstable connection between the signal and these dots’ areas. That explains the current trend to increase the resolution of toner-based printing.
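The instability can be quantified with a simple model; the numbers below are illustrative rather than measured. For a Gaussian exposure profile E(r) = E0·exp(−r²/2σ²), the developed dot covers the region where the exposure exceeds the process threshold T, so its area depends logarithmically on E0/T and drifts with any exposure or development instability:

```python
import math

def dot_area(e0, threshold, sigma=1.0):
    """Area of the region where a Gaussian exposure profile
    E(r) = e0 * exp(-r^2 / (2 * sigma^2)) exceeds the threshold."""
    if threshold >= e0:
        return 0.0
    r_squared = 2 * sigma ** 2 * math.log(e0 / threshold)
    return math.pi * r_squared

nominal = dot_area(1.0, 0.50)
drifted = dot_area(1.0, 0.55)   # a 10% drift of the development threshold...
print(round((nominal - drifted) / nominal * 100, 1))  # ...shrinks the dot area by ~13.8%
```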

At the time of the early electronic halftoning developments, there was a technical problem of providing the proper screen angle for each color separation. That’s why the first color electronic halftones were produced with the use of non-periodic (in modern terminology, stochastic) dot placement to avoid moiré. The specific “rational tangent” screen angling was later used in the first commercial digital screening of Hell’s Chromograph DC300 [11]. Such angling was later once again used in the first PostScript imaging applications because of its higher computational speed.
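The “rational tangent” restriction can be illustrated as follows: on a discrete recorder grid, a screen cell can repeat only along integer lattice vectors (a, b), so only angles whose tangent is a ratio of small integers are exactly reproducible (the function below is our own illustrative sketch).

```python
import math

def rational_angles(max_component=4):
    """Screen angles (degrees) achievable with integer lattice
    vectors (a, b), 0 <= a <= max_component, 1 <= b <= max_component."""
    angles = {round(math.degrees(math.atan2(a, b)), 2)
              for a in range(0, max_component + 1)
              for b in range(1, max_component + 1)}
    return sorted(angles)

print(rational_angles())
# The nominal 15 deg offset between separations has to be approximated,
# e.g. by atan(1/3) = 18.43 deg, while 0 and 45 deg are exact.
```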

The first attempt at creating a “soft proof” using TV techniques was implemented in the Toppan Printing auxiliary equipment CP 525. This kind of color adjustment visualization became especially urgent with the use of electronic reproduction technologies. Contrary to the manual retouching of an image-carrying transparency, the scanner operator had to deal blindly with only multiple potentiometers when setting the masking coefficients for each of the four process inks. Even low-resolution image memory operating at TV frame frequency wasn’t available at that time. So, the first soft proofing systems had to use a pick-up TV camera and to provide precise modeling of all color transformation stages: from the scanner image capture to the printing.


3.3 Computerized print data processing       

This period started when digital techniques became able to handle the image data volumes of a whole page at a resolution matching the color print quality level. There were two basic advantages in introducing a computer between the image capturing and output units of a color scanner. The first of them was in providing the sophisticated (local) color retouching, seamless combination of images and other artistic effects in Color Electronic Prepress Systems (CEPS). The other, and maybe greater, advantage was in the above-mentioned facility of laying out text and illustrative data on a page and in the subsequent pagination of such data for the whole press sheet.

When discussing this step, it isn’t out of place to recall that the complete presentation of an image by a set of discrete digital samples entails the rather computationally busy interpolative calculations for image sizing or rotation. On the contrary, providing such functions by a smooth lens zoom and film/original rotation in cameras was no problem 60 years earlier.
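A minimal bilinear resampler shows where the computational load comes from: each output sample is a weighted mix of four input samples, against the single optical operation of a lens zoom (the raster values below are illustrative).

```python
# Minimal bilinear resampling of a grayscale raster to a new size.
def bilinear(img, new_w, new_h):
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        y = j * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = min(int(y), h - 2)
        fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = min(int(x), w - 2)
            fx = x - x0
            # weighted mix of the four neighboring input samples
            top = (1 - fx) * img[y0][x0] + fx * img[y0][x0 + 1]
            bottom = (1 - fx) * img[y0 + 1][x0] + fx * img[y0 + 1][x0 + 1]
            row.append((1 - fy) * top + fy * bottom)
        out.append(row)
    return out

print(bilinear([[0, 100], [100, 200]], 3, 3))
# [[0.0, 50.0, 100.0], [50.0, 100.0, 150.0], [100.0, 150.0, 200.0]]
```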

The latest step, the transition from the closed CEPS to the open, PostScript-based graphic processing environment, has provided flexible remote exchange of text, image, layout, color proofing and other data. These new facilities turned out to be in high demand among the numerous players in the printing business. That’s why the transition to the novel way of publishing data communication took place at the beginning of the 90s within a rather short period of time. That resulted, in its turn, in a remarkable deterioration of print quality due to the discrepancy of color values’ interpretation at different production stages and locations. The same color is “spoken” differently by various inputs (Fig. 3). On the other hand, each output requires its specific input values to reproduce the same colorimetric value. That creates the problem of unified object color interpretation in the open pre-press environment of today. The problem then had to be urgently solved through the creation of the International Color Consortium (ICC) for color management and the development of its standards.
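The discrepancy can be sketched numerically: two devices characterized by different (here entirely hypothetical) RGB-to-XYZ matrices assign different colorimetric values to the same RGB triple, and a common connection space, as used in ICC profiles, makes the mismatch explicit and correctable.

```python
# The same RGB triple "means" different colors on differently
# characterized devices; mapping both into a shared colorimetric
# space (XYZ) exposes the difference. Matrix values are hypothetical.
def to_xyz(rgb, matrix):
    return tuple(round(sum(m * c for m, c in zip(row, rgb)), 3)
                 for row in matrix)

DEVICE_A = [[0.41, 0.36, 0.18],
            [0.21, 0.72, 0.07],
            [0.02, 0.12, 0.95]]
DEVICE_B = [[0.49, 0.31, 0.20],
            [0.18, 0.81, 0.01],
            [0.00, 0.01, 0.99]]

rgb = (0.5, 0.5, 0.5)            # nominally the same mid-gray signal
print(to_xyz(rgb, DEVICE_A))     # one colorimetric interpretation...
print(to_xyz(rgb, DEVICE_B))     # ...and a noticeably different one
```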

As one of the final steps of this evolution, one should also note the convergence of the earlier separated graphic transformation data and other process control data into a single file within the concepts of CIP3 (Computer Integrated Prepress, Press and Postpress) and, later, CIP4.


4. Summary

Knowledge of the technology evolution is important for the appropriate understanding of its current state and further development potential.

Such knowledge can be effectively mastered when it is based on a retrospective analysis of the principal reasons and compromises involved in the transition to each next technology generation.

The digital technologies of today allow for unprecedentedly precise and flexible control of graphic print data parameters. However, contrary to former times, more intensive research and training are required for adequate use of these facilities’ potential.



[1] E. Giorgianni, T. Madden, Digital Color Management: Encoding Solutions, Addison-Wesley, Reading, Mass., 1997.

[2] Y.V. Kuznetsov, High definition halftone printing – HDHP: background, current status and challenges of the adaptive screening developments, IARIGAI proc. vol. 35, pp. 323-331, 2008.

[3] Y.V. Kuznetsov, Pre-press image processing. Screening (handbook), Mir Knigi, Moscow, 1998 (Russ.)

[4] “Premiere of Helio Data Processing at Hell’s”, EPI, N5, p. 29, 1978.

[5] Y.V. Kuznetsov, Does some philosophy still exist for the halftone frequency selection?, Proc. of IS&T’s Int. Conf. on Digital Printing Technologies, Orlando (USA), Oct.17-22, pp. 362-365, 1999.

[6] E. Enoksson, Image reproduction practices, TAGA Proceedings, pp. 318-331, 2004.

[7] Y.V. Kuznetsov, Adaptive Screening Developments at the Graphic Technology Department, Papers of the 42nd Conf. of Int. Circle, 19-20 Oct., Moscow, pp. 69-74, 2010.

[8] J.F. Crosfield, Recollection of Crosfield Electronics. 1947 – 1975.

[9] R. Koll, F. Zeyen. Patent DE 1193534, appl. 27.12.1963.

[10] F. Hatayama, T. Tanaka. Patents DE 3431482, GB 21455896.

[11] U. Gast. Method and app. for recording rastered continuous-tone pictures in printed graphics. US patent 3725574.  

Fig. 1. Initial division and final convergence of text and image processing for a print forme on its way from manual engraving to the desktop technology of today.

Fig. 2. Screening has excluded the need for the artistic services of an engraver for the reproduction of photographs. However, the fine detail of the image is still destroyed by the halftone dots (a). The locally adaptive halftoning techniques [7], which simulate the engraver’s skill of representing the thin line of an original by an ink solid (b), are still not in wide use.

Fig. 3. The same color is “spoken” differently by various inputs, creating the problem of unified object color interpretation in the open pre-press environment of today.

Author biography

Professor Yuri V. Kuznetsov has a Ph.D. from the Leningrad Bonch-Bruevich Institute of Electric Communication. Until 1982 he worked there as a team leader in the Graphic Arts Laboratory on the development of electronic reproduction systems. His Dr. Sc. degree in Printing Machinery and Technology was received from the Moscow State University of Printing. Currently he is the Chairperson of the Graphic Technology Department of the North-West Institute of Printing at the St. Petersburg State University of Technology and Design, where he teaches courses and conducts research in pre-press image processing, print quality and color management. He holds over 20 patents, has written over 30 papers on topics related to imaging in printing and has authored three books, the most recent of which are "Image Processing Technology in Printing" (2002) and “Introduction in problems of color communication” (2011). The basic results of his scientific research are implemented in the High Definition Halftone Printing (HDHP) technology using the patented algorithms of locally adaptive print image encoding.