Below is a list of current speakers, with more to be announced in the coming weeks.
Michael (MJ) John
3M
Print quality management faces unprecedented challenges—from global color consistency to rapid tech shifts. This session explores the Top 10 issues confronting today's print managers and how emerging technologies like AI, automation, and cloud workflows can eliminate these pain points, ensuring brand integrity and efficiency in a fast-changing print landscape.
Marek Skrzynski
CSW
White ink represents a significant cost in flexographic printing and has long suffered from mottle and pinholing that reduce print quality and efficiency. The development of microcell surface patterns on photopolymer plates improved ink laydown, reduced defects, and increased opacity at lower ink volumes, with later refinements addressing the unique challenges of white ink and spot colors. This paper reviews a decade of microcell innovation, correlating print performance metrics with microcell geometry, imaging resolution, pigment size, and anilox volume across major vendor technologies. The findings demonstrate substantial ink savings, improved opacity and mottle reduction, expanded press stability, and outline a roadmap for future microcell and pigment engineering advancements.
White ink represents 40% to 50% of the total cost of ink used. For decades, flexographic printers have struggled with persistent white-ink mottle and pinholing, which compromised solid coverage, print quality, and press efficiency (Miraclon, 2022; E.C. Shaw, 2019; Skrzynski, 2014, 2015). To mitigate these issues, many resorted to screening solids with high tonal values (97–98%) and fine halftone rulings (150–200 lpi) in place of solids. These workarounds often sacrificed opacity, increased workflow complexity, and provided only partial and inconsistent relief (Harper, 2015; Skrzynski, 2015).
The introduction of microcell surface patterns on photopolymer flexo plates opened a new frontier. By engineering microscopic geometries into the plate surface, printers altered ink-wetting behavior, improving ink laydown uniformity, reducing pinhole density, and increasing opacity at lower ink volumes (Miraclon, 2022; E.C. Shaw, 2019; Skrzynski, 2014). Initially targeted at process CMYK inks, these techniques soon proved beneficial for white inks and spot applications (Miraclon, 2022; Skrzynski, 2014). Early solutions, such as Kodak’s NX dimpled-dot architecture, were less effective against white-ink or spot color mottle due to large pigment particle sizes and ink rheology diverging from ideal conditions (Harper, 2015; Skrzynski, 2015).
This paper presents a decade-long retrospective of microcell development, correlating measured mottle metrics, opacity (OP), anilox volume (BCM), and pigment particle sizing (nm scale) to specific microcell sizes (in pixels) and geometries (Miraclon, 2022; E.C. Shaw, 2019; Skrzynski, 2015). We show that mismatches, such as selecting inappropriate cell sizes (in pixels) or repeat structures (e.g., checkerboard, groove, asymmetrical, etc.), can be counterproductive (Miraclon, 2022; Skrzynski, 2014). We also examine how advances in laser imaging (2400 to 4000 dpi) and the shift from film-based to digital photopolymer (LAM carbon mask) workflows expanded the design space for microcell engineering (E.C. Shaw, 2019; Esko, 2025). The analysis includes multiple vendor approaches: HYBRID, Esko (XPS/Crystal/Q-Cells), XSYS (“Woodpecker”), Miraclon NX (ex-Kodak), and Hamillroad’s Bellissima DM screening (Miraclon, 2022; Harper, 2015).
Key results previewed include:
• Up to a 37% reduction in required white-ink anilox volume (from a historical 10–7 BCM to a new 5–4.5 BCM) while maintaining or improving opacity (OP) above 55 (Hamillroad, 2025).
• SID increases of about +0.15 to +0.20 for process CMYKOVG, along with reduced mottle and suppression of secondary patterns in Expanded Color Gamut (ECG) printing.
• Ink savings of roughly 15–20% with high-frequency surface patterns, improving print sustainability.
• Mottle index reductions of 99% in white ink compared with legacy methods.
• Expanded press stability windows, with reduced drying times and significantly increased run speeds (in some cases doubled), enabled by new microcell/ink/tape/plate combinations.
• Practical lessons on the interplay among pigment particle size (64 nm–300 nm), imaging resolution, and microcell geometry.
By framing this as a retrospective, the paper documents the evolution of microcell design over the past decade and outlines a roadmap for future development driven by continued advances in microcell geometry and pigment engineering.
Antonia Götz 1, Jan Christoph Janhsen 2, Stefan Güttler 1
1 Hochschule der Medien, Stuttgart 2 Fraunhofer Institute for Manufacturing Engineering and Automation IPA
This paper explores inkjet printing as a platform for functional applications such as printed electronics, medical devices, and selective material deposition. While inkjet offers unique advantages in individualization and multimaterial processing, its adoption has been limited by strict viscosity constraints on jettable materials. The study introduces a systematic methodology to characterize high-viscosity, particle-loaded functional inks enabled by emerging printhead technologies capable of jetting up to 250 mPa·s. The work aims to link ink rheological properties to waveform parameters, laying the groundwork for expanding inkjet printing into new functional material applications.
Inkjet printing is not only interesting for graphical applications but also offers advantages for functional applications such as printed electronics, selective gluing, and medical uses. However, in these applications, the focus lies on the desired properties of the final print. In the case of dentures, for example, this may be a certain toughness, while for printed electronic structures, it is the ability to conduct electricity.
Currently, these applications are addressed by other printing or additive manufacturing techniques such as vat photopolymerization or screen printing. While these techniques offer a wide parameter range for materials, there are applications that require greater flexibility and individualization during processing or a multimaterial printing approach.
One printing technique that offers a high level of individualization, flexibility, and multimaterial capability is inkjet-based digital printing. It enables rapid on-demand production, requires no printing form, and is contactless. One disadvantage of inkjet printing, however, is its strong limitation in terms of jettable materials. The maximum viscosity for typical industrial piezo inkjet printheads is around 25 mPa·s. Recently, however, a novel printhead technology developed by Quantica GmbH has been introduced, which can jet fluids with viscosities of up to 250 mPa·s. This opens the possibility for a whole new range of functional fluids that could potentially be processed via inkjet.
However, the high particle loads and rheological behavior of functional materials are not yet well researched from an inkjet printing perspective. In this study, a systematic method to characterize these fluids is presented. This includes suitable measurement methods and their validation to determine ink properties relevant to droplet formation. The ultimate goal is to link physical properties such as viscosity, viscoelasticity, and surface tension to waveform parameters.
This work presents the first steps toward identifying suitable characterization methods and potential materials to systematically evaluate the correlation between ink properties and waveform parameters.
Krzysztof Krystosiak 1, Kai Lankinen 2, Martin Habekost 1
1 Toronto Metropolitan University 2 Tampere University of Applied Sciences
This study examines Expanded Color Gamut (ECG) printing as a strategic alternative to traditional spot-color workflows, building on prior research that demonstrated efficiency and sustainability benefits. To address gaps between simulation and real-world validation, the research integrates laboratory colorimetric testing with industrial production trials. A calibrated CMYK + OGV flexo configuration is compared against a CMYK + 2 PMS workflow to measure color accuracy, reproducibility, and process efficiency. Empirical print data are combined with Life Cycle Assessment (LCA) to evaluate environmental impacts, establishing a multi-criteria framework linking print performance to sustainability outcomes.
Expanded Color Gamut (ECG) printing has evolved from an experimental innovation into a recognized strategic approach that enhances both print quality and sustainability. Previous research conducted at Toronto Metropolitan University and Tampere University of Applied Sciences demonstrated that replacing spot-color workflows with fixed-palette systems (CMYK + OGV) can significantly reduce make-ready time, washing solution use, and ink waste (Habekost et al., 2024; Krystosiak et al., 2025; Lankinen, 2021). However, most of these findings were based on simulation and production data, not on controlled laboratory verification. The next phase of our investigation aims to bridge this gap by integrating process-control software and Life Cycle Assessment (LCA) to evaluate both colorimetric accuracy and environmental performance under real-world conditions.
The main objective of this study is to evaluate how expanded color gamut printing performs, both technically and sustainably, when compared to a traditional CMYK + 2 PMS workflow. This configuration represents a typical design setup used in commercial print production, although some designs may include as many as 7–8 spot colors, while others rely solely on CMYK. Two complementary research phases are planned. First, controlled laboratory tests will be carried out at the Graphic Communications Management (GCM) laboratory at Toronto Metropolitan University. These experiments will focus on the colorimetric behavior of both systems. A calibrated seven-color flexo configuration (CMYK + OGV) will be benchmarked against a reference CMYK + 2 PMS setup to quantify achievable color accuracy and reproducibility across multiple print runs.
Second, an industrial partner will conduct full-scale production trials using comparable commercial jobs. This collaboration will enable the collection of real-world process data, including setup time, plate changes, ink usage, washing solution use, substrate waste, and cleaning cycles. These data will then be entered into a comprehensive Life Cycle Assessment (LCA) comparing both printing setups. The LCA will evaluate global warming potential, cumulative energy demand, and volatile organic compound (VOC) emissions in accordance with ISO 14040/44 standards. By integrating empirical color data with environmental metrics, the study establishes a multi-criteria evaluation framework that links process control to sustainability outcomes.
Sean Garnsey
Enova Concepts, LLC
Printing is a multidisciplinary, evolving technology with applications extending beyond traditional graphics into areas such as printed electronics. The study documents a collaboration with NASA and the University of Texas at San Antonio to develop printable materials and processes for lunar missions, combining established print practices with electronic performance requirements. The research identifies materials, workflows, and testing methods that enable printed components to tolerate extreme temperature swings and integrate with aerospace alloys. The work highlights new opportunities for the graphic communications industry to expand into space manufacturing and advanced printed electronics.
The history of printing technology has been defined by disruption, adaptation, and the continual evolution of a family of highly specialized manufacturing systems to meet the needs of mankind. Printing is inherently multidisciplinary, uniting mechanical, chemical, and electronic systems to transform patterns into products. Today—perhaps to the chagrin of those raised within the graphics industry—the term printer has been widely adopted across science and engineering to describe machines that fabricate patterned materials. This broader use opens new opportunities for the traditional graphic printing industry to extend its expertise into advanced technologies such as printed electronics.
In this work, we describe the process steps undertaken by Enova Concepts in collaboration with NASA and the University of Texas at San Antonio to develop a materials set and printing process for lunar missions. Many of the same development stages familiar to graphic printing (ink synthesis, compatibility, rheology, formulation, metrology, and durability testing) remain central, with the addition of electronic considerations such as film continuity, conductivity, and stability under extreme conditions. We outline candidate materials, process flows, and characterization methods that enable functional printed electronics to survive the harsh environment of space. Key outcomes of this effort include a printable insulation layer capable of surviving temperature swings from −180 °C to +120 °C, integration with aerospace alloys (Al 6061 and Ti-64), and compatibility with multifunctional sensor materials that would enable printable radiation detectors.
This emerging application highlights a unique opportunity for printing companies and university programs: to leverage their expertise in inks, substrates, and process optimization to enter the expanding field of printed electronics and space manufacturing, carrying forward the tradition of printing as a technology of adaptation and innovation.
Chris Yi-Ho Bai
BenQ Corporation
This paper investigates perceptual color matching differences between CCFL- and LED-backlight displays when used for soft-proofing against color-managed prints. Through controlled psychophysical experiments, it demonstrates that CCFL displays are perceived as more closely matching prints, while LED displays tend to appear overly saturated despite equivalent colorimetric calibration. To address this discrepancy, the study introduces observer correction matrices integrated into the ICC profile architecture to compensate for perceptual differences on LED displays. Validation through expert observer evaluations and ISO 14861 criteria shows that these corrections significantly improve both visual matching and objective colorimetric accuracy, offering a practical solution for modern color-critical workflows.
This research investigates methodologies to improve perceptual color matching performance between CCFL-backlight displays and LED-backlight displays when evaluated against color-managed prints in professional soft-proofing workflows. Two sequential experiments were designed and conducted to address this objective. The first experiment systematically evaluated the influence of display backlight technology on perceptual color matching performance relative to color-managed prints through a comprehensive psychophysical study involving expert observers. The second experiment developed and validated an observer correction matrix methodology applied to LED-backlight displays to improve perceptual color matching performance, with validation conducted through both psychophysical evaluation and quantitative colorimetric analysis based on ISO 14861 soft-proofing standard criteria.
All experiments were conducted in a dark room environment under strictly controlled viewing conditions with neutral gray surroundings to minimize the influence of ambient light and flare on observer judgments. Both CCFL and LED displays were hardware-calibrated with a spectrophotometer to identical colorimetric targets: Adobe RGB color gamut, 160 cd/m² white luminance, CIE Standard Illuminant D50 white point chromaticity, 0.5 cd/m² black level, and an L* tone response curve. The calibration targets were precisely matched to the colorimetric parameters of the reference color prints, which were evaluated under standardized viewing conditions in an X-Rite Judge QC viewing booth equipped with D50 illumination at 2000 lux.
Historically, CCFL displays have been strongly favored in professional color-critical applications including prepress, photography, and graphic design workflows due to their superior spectral stability, relatively continuous spectral power distribution, and consistent neutral grayscale rendering performance. While previous studies have extensively compared the spectral characteristics and metamerism indices of CCFL and LED backlights from a purely colorimetric perspective, relatively few studies have systematically evaluated their perceptual color matching performance relative to color-managed reflection prints through structured observer studies employing experienced color evaluation professionals. The first experiment addresses this research gap by providing empirical evidence regarding the perceptual aspects of color matching performance as influenced by different backlight display technologies under identical colorimetric calibration conditions.
In the first experiment, a two-part paired comparison study was designed in which observers evaluated two experimental conditions: CCFL backlight display compared to reference prints, and LED backlight display compared to identical reference prints. Eleven carefully selected test images were incorporated into the experimental protocol: nine chromatic images representing critical color categories (red, green, blue, cyan, magenta, yellow, skin tone, neutral colors, and multi-chromatic blended compositions) and two achromatic grayscale gradient images spanning the full tonal range. Seven observers with ten years or more of professional color judgment experience in printing, prepress, or display calibration fields were recruited to participate in the first experiment. Observers were systematically asked to evaluate three perceptual attributes: color similarity (degree of color match between display and print), saturation appearance (whether display appeared more or less saturated relative to print), and overall preference (which display provided more acceptable soft-proofing appearance) for each image pair.
Results from the first experiment demonstrated that the CCFL backlight display exhibited colors that were perceptually more similar to the reference prints and received significantly higher preference ratings from expert observers. In contrast, the LED backlight display consistently exhibited colors that appeared more saturated than the corresponding prints across the majority of chromatic test images. This finding is particularly significant because the colorimetric calibration results of the two displays were nearly identical within measurement uncertainty, yet visual judgments by observers revealed substantial perceptual differences. Notable inter-observer variation was also observed even among this group of highly experienced color evaluation professionals, suggesting that individual observer differences play a meaningful role in soft-proofing color assessment.
Based on these findings from the first experiment, the second experiment was specifically designed to compensate for and correct the visual discrepancies between the LED backlight display and reference prints by incorporating an observer correction matrix within the ICC profile architecture. The observer correction matrix was established through a systematic visual matching procedure performed between the LED backlight display and reference prints. Individual observers were presented with color patches displayed on the LED backlight display and were asked to interactively adjust the hue angle, saturation magnitude, and lightness level of each displayed color using a graphical user interface until they achieved their perceived optimal visual match with the corresponding printed color patch viewed simultaneously. Once observers completed the adjustment procedure for all target colors (white, red, green, blue, cyan, magenta, and yellow), representing the primary and secondary corners of the RGB color space, the adjusted colors were measured spectrophotometrically using a calibrated spectrophotometer, and individual observer correction matrices were computationally generated based on transformations derived from CIE 1931 2° Standard Observer color matching functions.
Forty-five observers with varying levels of color expertise participated in the second experiment, and substantial variation in individual color matching preferences and adjustments was observed across the observer population. This variation reflects the well-documented phenomenon of observer metamerism and individual differences in color perception. To facilitate practical implementation in commercial product design and to develop a generalizable solution rather than requiring individualized calibration for each user, it was necessary to categorize the 45 individual correction matrices into a manageable number of representative observer classes. Therefore, K-means clustering analysis, an unsupervised machine learning classification method, was utilized to establish three categorical observer classes based on similarity of correction matrix parameters. The optimal number of clusters (k=3) was determined through analysis of within-cluster sum of squares and silhouette coefficients.
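The clustering step can be sketched in a few lines. This is an illustration only: the 45 "correction matrices" below are simulated perturbations of the identity matrix, not the study's measured observer data, and a minimal Lloyd's-algorithm k-means stands in for whatever implementation was actually used.

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: cluster row vectors into k groups."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each observer's flattened matrix to the nearest center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned matrices.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

# 45 hypothetical observer correction matrices (3x3), flattened to
# 9-element vectors: small random perturbations of the identity matrix.
rng = np.random.default_rng(1)
matrices = np.eye(3).ravel() + 0.05 * rng.standard_normal((45, 9))
labels, centers = kmeans(matrices, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

In the study itself, the number of clusters (k = 3) was selected via within-cluster sum of squares and silhouette analysis; the sketch above simply takes k as given.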
In the validation phase of the second experiment, six different observer correction matrices were systematically applied to the displayed images on the LED display: CIE 1931 2° Standard Observer matrix (baseline condition representing conventional colorimetric calibration), individual observer-specific matrix (personalized correction for each participant), population average matrix (mean correction across all 45 observers), and three K-means cluster-based matrices representing the three identified observer classes (Cluster 1, Cluster 2, and Cluster 3). Observers then evaluated all six sets of corrected displayed images in comparison to the corresponding reference prints under identical viewing conditions. In the evaluation process, seven single-color patches (white, red, green, blue, cyan, magenta, and yellow) were systematically assessed, and observers were asked to select one correction matrix category for each color patch that provided the best perceptual match to the print through a forced-choice comparative judgment methodology.
Results from the validation experiment indicated that individual observer-specific matrices and K-means Cluster 1 matrices demonstrated statistically superior matching performance compared to the CIE 2° Standard Observer matrix baseline condition across the majority of test colors. These findings provide strong evidence that observers prefer the application of observer correction matrices to LED backlight displays for soft-proofing purposes, and that clustered observer categories can provide improved performance over standard observer-based calibration while maintaining practical implementability. To provide objective quantitative support for the psychophysical experiment results, a complementary validation study based on ISO 14861:2015 soft-proofing standard criteria was conducted.
An observer correction matrix based on a representative set of individual color matching preferences was formally incorporated into the ICC display profile structure through modification of the colorimetric transformation tables. This modified ICC profile was tested against ISO 14861 compliance criteria using standardized test targets. Application of the observer correction matrix demonstrated measurable and statistically significant improvements in both average ΔE values and maximum ΔE values when comparing the LED display soft-proof to reference prints.
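To make the ΔE comparison concrete, the following sketch applies a hypothetical 3×3 correction matrix in XYZ and evaluates ΔE*ab (CIE 1976) against a print patch. All numeric values here (the matrix entries and the patch coordinates) are illustrative placeholders, not the paper's measured data.

```python
import numpy as np

D50 = np.array([96.42, 100.0, 82.49])  # ICC PCS D50 white point

def xyz_to_lab(xyz, white=D50):
    """CIE XYZ -> CIELAB relative to the given reference white."""
    t = np.asarray(xyz, float) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,        # L*
                     500 * (f[0] - f[1]),    # a*
                     200 * (f[1] - f[2])])   # b*

def delta_e76(lab1, lab2):
    """CIE 1976 color difference (Euclidean distance in CIELAB)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical observer correction applied to display XYZ before
# comparison (illustrative values only).
M = np.array([[0.98, 0.01, 0.00],
              [0.00, 1.00, 0.00],
              [0.00, 0.00, 1.03]])

print_xyz   = np.array([30.0, 25.0, 15.0])   # reference print patch
display_xyz = np.array([31.0, 25.5, 14.2])   # uncorrected display patch

before = delta_e76(xyz_to_lab(print_xyz), xyz_to_lab(display_xyz))
after  = delta_e76(xyz_to_lab(print_xyz), xyz_to_lab(M @ display_xyz))
print(f"dE before correction: {before:.2f}, after: {after:.2f}")
```

In the actual profile, an equivalent transformation is folded into the colorimetric tables rather than applied per patch at evaluation time.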
The two experiments conducted in this research collectively demonstrate that perceptually significant differences exist between CCFL backlight and LED backlight displays, even when both are calibrated to identical colorimetric targets using hardware calibration with a spectrophotometer. These findings were consistently confirmed by experienced color evaluation experts in the first experiment, establishing the practical relevance of this phenomenon in professional workflows. By applying observer correction matrices through standard ICC profile architecture, both visual perceptual differences and objective colorimetric measurement differences (ΔE values) showed substantial and statistically significant reduction. This research indicates that it is both technically feasible and practically beneficial to improve soft-proofing performance between LED backlight displays and color-managed prints using existing ICC profile infrastructure, providing an implementable solution for contemporary display technologies in professional color-critical workflows without requiring fundamental changes to established color management systems.
Celeste M. Calkins and Calista J. Smith
Illinois State University
Profitability in print manufacturing is challenging because jobs are priced before production, leaving little room to adjust for production cost variability, despite industry profit margins averaging below 5%. Cutting table estimation is a recurring problem area due to differences in materials, tool speeds, design complexity, and geometry, with current methods relying on cut length and subjective complexity factors. Prior research suggests that cutting time may be more strongly influenced by the number of tool direction transitions rather than cut distance or design size. This study explores whether more robust time standards and potentially AI-assisted estimation of transition points can improve cutting time accuracy, even when final artwork is not yet available.
In an industry where product is produced to order, profitability can be a major challenge. In most manufacturing, the product sold is often already sitting in inventory, meaning that as supply decreases or material costs increase, the price can be adjusted to guarantee a profit. This change may happen on a day-to-day or week-to-week basis. In print, we estimate a job, often before even seeing artwork, and are then held to that estimate. It is not until all production is complete that we can truly determine whether the job turned a profit. Profitability is necessary for a company to survive. For printing companies in the United States, data suggest that the average profit margin was below 5% for 2024 (IBISWorld, 2024); similar figures may exist beyond U.S. borders. The need to ensure profitability, accuracy, and consistency in job estimation is therefore essential. In communicating directly with companies across the U.S., one asset/cost center that many struggle with is cutting table estimation. This is due to a variety of reasons, including the speed differences across cutting and scoring tools, the material being cut (e.g., paperboard, corrugated board, wood, metal), the intricacy and complexity of the designs, and the shape and size of the designs. In some situations, larger designs cut faster than smaller designs due to the smoothness of the shapes and the ease of cutting/scoring tool transitions (Calkins & Jacob, 2025). One possibility raised in prior research is that the number of direction transitions for the cutting/scoring tools may be a better predictor of cutting time than any of the aforementioned factors.
Cutting Time Standards
Currently, trainings are available within the printing industry on estimation for cutting table technology. Those trainings base cutting table estimates on the length of the cut, the number of cuts needed (in the event of multiple products), the speed of the cutting tool, and a complexity factor subjectively applied by the estimator. Notably, the length of the cut accounts only for the exterior cut dimension, based on the circumference equations:

C = 2πr or C = πd

In the case of unique or complex shapes, these equations do not accurately represent actual cut lengths. Further, any interior cuts are accounted for only by applying a higher complexity factor within the estimate.
Expanding on prior research (Wilson & Carey, 2020; Calkins & Jacob, 2025), this study will examine how aspects beyond the size, shape, and intricacy/complexity of a design affect the time necessary to complete the cut(s). Prior results (Calkins & Jacob, 2025) suggest that the distance of the cut may be less important than initially thought; rather, the number of times the CAD table must stop and adjust cut direction could be a better indicator of cut time. The challenge is that, when estimating a job, a company often does not yet have artwork available. Given a rough idea of the product/concept, could AI help estimate the number of transition points within a design? Through the testing of different designs and substrates, we hope to address the following questions:
1. Can we create more robust time standards that could serve as a proxy for estimating cutting table jobs that are very seldom typical?
2. Can the number of transition points be used as a better estimation technique than complexity factors within given estimating practices?
3. Can AI help in estimating the number of transition points in a job to allow for estimation of the job prior to obtaining the official artwork?
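As a hypothetical illustration of the transition-point idea (the function and angle tolerance below are our own sketch, not an established estimating formula), direction changes can be counted directly from a cut path's vertices:

```python
import math

def count_transitions(points, angle_tol_deg=5.0):
    """Count direction changes along a polyline cut path.

    points: list of (x, y) vertices in cut order.
    A transition is counted whenever the heading between consecutive
    segments changes by more than angle_tol_deg degrees, i.e. whenever
    the cutting tool must decelerate, pivot, and re-accelerate.
    """
    transitions = 0
    prev_heading = None
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        if prev_heading is not None:
            diff = abs(heading - prev_heading) % 360
            if min(diff, 360 - diff) > angle_tol_deg:
                transitions += 1
        prev_heading = heading
    return transitions

# A closed square path involves three 90-degree direction changes
# between its four segments; a smooth near-circular polygon whose
# per-segment heading change stays under the tolerance yields none.
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
print(count_transitions(square))  # 3
```

Under this metric, a large smooth shape can score lower than a small jagged one, consistent with the observation above that larger designs sometimes cut faster than smaller ones.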
John Seymour 1, William Pope 2 and Bruce Leigh Myers 2
1 John the Math Guy, LLC and 2 RIT
This paper examines the lack of objective methods for measuring the perceived metallic appearance of premium metallic inks used in packaging. While prior research has linked ink film thickness to spectrophotometric measurements, these metrics do not necessarily capture the visual qualities that make metallic inks visually appealing. The paper reviews existing measurement instruments and common metrics for metallic appearance, highlighting both their effectiveness and limitations. It proposes new image-based, BRDF-derived metrics, specularity volume and specularity width, as perceptually relevant measures for quantifying metallic appearance.
A premium is paid for metallic ink, particularly in the packaging market. The metallic appearance of a product projects luxury and quality, and the fact that the metallic glint is very bright and moves as the customer walks past is thought to draw attention to the package. But despite the added expense of metallic inks, little attention has been paid to measuring the degree of metallic appearance in the print industry.
Measurement of metallic inks has been studied by adapting existing instruments and techniques, for example, in four TAGA papers [Mannig 2002, Habekost and Andino 2016, Habekost and Ma 2017, and Habekost and Ma 2018]. These papers have identified a strong correlation between ink film thickness and measurements made using the M3 (polarized) mode of a 45:0 spectrophotometer. This is a useful metric for assuring consistent ink film thickness, which we can control on press, but it may or may not assure that the metallic appearance, the very attribute that the premium is paying for, has been optimized.
This paper starts with an explanation of what we perceive as metallic appearance and the underlying physics that leads to an object appearing metallic. This explanation is bolstered by the exposition of a very simple device for viewing the angular pattern of reflected light in a goniogram. A simple hypothesis is suggested for what we perceive as metallic appearance.
This is followed by an explanation of the various devices that have been used to provide a metric for metallic appearance, and an explanation of the variety of metrics (with obscure names!) that have been used to quantify metallic appearance, including distinctness of image, gloss, haze, reflected image quality, RSpec, and SPI. Emphasis will be put on why these methods work and expected weaknesses of the metrics.
These metrics are all based on devices with a small collection of point detectors – detectors that measure light that reflects from the sample at a small collection of angles. The advent of inexpensive imaging technology leads to instruments capable of collecting light reflected at thousands or millions of angles, thus providing a portion of the entire BRDF (Bidirectional Reflectance Distribution Function). The suggestion is made for a new pair of metrics that presumably will correlate closely with our perception of metallic appearance. The two metrics represent two attributes for metallic appearance: specularity volume (which is the volume under the specular peak) and specularity width (which is the width of the Gaussian that fits the specular data). These are both measures of specularity, the ability of a surface to behave like a mirror. According to the postulated theory, a threshold level of both of these is required for us to perceive metallic appearance.
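A minimal sketch of how the two proposed metrics might be computed from a one-dimensional goniometric slice follows. The data are synthetic, and a moment-based width estimate stands in for an explicit Gaussian fit; both choices are our assumptions, not the paper's method.

```python
import numpy as np

def specularity_metrics(angles_deg, reflectance):
    """Estimate the two proposed metrics from a 1-D goniometric slice.

    specularity volume: area under the specular peak (trapezoidal rule)
    after subtracting the diffuse floor.
    specularity width: standard-deviation width of the floor-subtracted
    peak, treating it as an (unnormalized) Gaussian.
    """
    a = np.asarray(angles_deg, float)
    r = np.asarray(reflectance, float)
    peak = r - r.min()                                # remove diffuse floor
    volume = np.sum((peak[1:] + peak[:-1]) / 2 * np.diff(a))
    mean = np.sum(a * peak) / np.sum(peak)
    width = np.sqrt(np.sum(peak * (a - mean) ** 2) / np.sum(peak))
    return volume, width

# Synthetic goniogram: a Gaussian specular lobe (sigma = 4 degrees,
# amplitude 5) sitting on a flat diffuse background of 0.2.
angles = np.linspace(-30, 30, 601)
lobe = 5.0 * np.exp(-angles**2 / (2 * 4.0**2)) + 0.2
vol, wid = specularity_metrics(angles, lobe)
print(f"specularity volume ~ {vol:.1f}, width ~ {wid:.2f} degrees")
```

A full imaging instrument would supply a two-dimensional slice of the BRDF rather than this single angular sweep, but the volume/width decomposition works the same way.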
Dr. Hanno Hoffstadt
GMG Germany
This paper addresses the challenge of accurate color conversion in multicolor printing, where conventional ICC table‑based methods become impractical due to exponential growth in table size. It introduces a subspace‑based data format and transformation engine that stores only key ink combinations while estimating missing combinations with controlled interpolation and correction stages. The approach supports high accuracy, reduced memory usage, and flexible conversion to Lab, spectral, or device color spaces—even when overprints involve more than four inks. The method enables scalable color management for expanded‑gamut and spot‑color workflows without the discontinuities or storage limits of earlier solutions. Full Abstract
Color conversions for printing are usually done with conversion tables which are stored in color profiles (like ICC profiles) and performed by a transformation engine. Tables are organized as a complete multi-dimensional grid which allows for fast interpolation. For CMYK printing, this is well established. For example, conversions from CMYK to CMYK often use tables with 17 steps per ink, which is enough resolution to sample the behavior of nonlinear processes. Such tables have 17⁴ = 83,521 entries. Each entry has 16-bit values per output channel, in this case 8 bytes, so that the complete table occupies about 650 KB.
In multicolor printing, about 2–4 inks are typically used in addition to CMYK (custom spot colors or extra process colors). Customers expect the same accuracy for the CMYK part as before, which is a challenge due to the exponential size of the table grid. Even if only 10 steps are used for the additional inks, the number of entries increases by a factor of 10² to 10⁴, which takes too much memory and computation time.
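The table-size arithmetic above can be checked with a short script, using the figures quoted in the abstract (17 steps per ink, four 16-bit output channels, 10 steps per additional ink):

```python
# Grid-table size for an n-ink conversion table:
# entries = steps_per_ink ** n_inks; bytes = entries * out_channels * 2 (16-bit).
def table_bytes(steps, n_inks, out_channels=4):
    return (steps ** n_inks) * out_channels * 2

cmyk_kb = table_bytes(17, 4) / 1024  # 17^4 = 83,521 entries -> ~650 KB

# Adding 2..4 extra inks at 10 steps each multiplies the entry count by
# 10^2 .. 10^4, pushing the table from well under a megabyte into gigabytes.
sizes_gib = [table_bytes(17, 4) * 10 ** k / 2 ** 30 for k in (2, 3, 4)]
```

With four extra inks the full grid would exceed 6 GiB, which makes the exponential blow-up, and the motivation for the subspace approach below, concrete.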
Our goal was to find an approach for color conversions with small table sizes, high accuracy, support for many inks, and the ability to transform any input ink combination to Lab, spectra, or other device color spaces.
We achieved this goal with a new subspace-based data format and corresponding transformation engine. Only important ink combinations are stored, but as complete subspace tables with fast interpolation. The engine can estimate results for ink combinations which are not available. Different methods are available for Lab, spectral and device link values. To avoid uncontrolled extrapolation, correction stages can adjust estimations to known bounds.
Often, a color is separated with not more than 4 inks, and some combinations can be excluded. ECG printing with CMYKOGV inks might use only CMYK, OMYK, CGYK, and CMYV or CMVK. Therefore, earlier solutions relied on such separations for special cases and needed to store only a few 4-dimensional tables [1]. Later, an iccMAX example was given for CMYK using multiple 3-d tables and one two-step CMYK table [2]. However, the selection mechanism caused unwanted jumps at transitions from 3-d to CMYK.
Also, it cannot be guaranteed that input data for transformations are restricted to known subspaces. Resampling of image data and contour trapping can create areas with more than 4 inks. In proofing, registration marks typically occur in all inks. The transformation engine must provide a decent estimate of such overprints even though they are not stored in a table.
Our storage concept uses an arbitrary collection of subspaces, where each ink has to occur at least in one subspace and must have the same table steps in all subspaces where it appears. For example, for a use case with CMYK+2 spot colors X and Z, we could have 3 tables: CMYK (4-d), X (1-d), Z (1-d). But if overprints with black are also known (like in the HKS 3000 swatch books), we could have CMYK, X+K (2-d), Z+K (2-d), where K must have the same steps in all three tables.
Our transformation engine examines the input ink values. The non-zero inks form the subspace of the input. If a table exists, the result is interpolated and returned as an authoritative answer. If not, an estimation must be calculated. Earlier methods required at least known corners of overprints [3], which are often not available (consider X and Z, where neither X+Z nor X+Z+CMYK is known).
So our transformation must be able to work without this information. Our estimation works by splitting the inks into two sets, one with a known result, and another (preferably with a low contribution) whose effect on the output values can be determined from a containing table. The estimator then applies the effect on the known result and returns it as an estimation.
For example, the effect of a percentage P of X could be determined from a containing table (1-d X, or 2-d X+K above) from the result with P and the result where P=0.
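The storage and estimation scheme can be illustrated with a deliberately simplified sketch. The single scalar "result", the additive application of the split-off ink's effect, and the table contents are all illustrative assumptions; the actual engine interpolates multi-dimensional subspace tables, supports Lab, spectral, and device-link outputs, and applies the correction stages described below.

```python
# Toy subspace store: keys are tuples of ink names; values map ink-percentage
# tuples to a scalar "result" (a stand-in for the Lab/spectral values the
# real engine interpolates from full grid tables).
TABLES = {
    ("X",): {(0,): 0.00, (50,): 0.40, (100,): 0.70},
    ("Z",): {(0,): 0.00, (50,): 0.30, (100,): 0.55},
}

def transform(ink_values):
    """Return (result, authoritative) for input like {"X": 50, "Z": 100}."""
    live = tuple(sorted(k for k, v in ink_values.items() if v > 0))
    if not live:
        return 0.0, True
    if live in TABLES:  # subspace is stored: authoritative answer
        return TABLES[live][tuple(ink_values[k] for k in live)], True
    # Estimation: split off one ink whose effect a containing table supplies,
    # then apply that effect to the result of the remaining inks.
    split = live[-1]
    table = TABLES[(split,)]
    effect = table[(ink_values[split],)] - table[(0,)]
    base, _ = transform({k: v for k, v in ink_values.items() if k != split})
    return base + effect, False

exact, authoritative = transform({"X": 50})           # stored subspace
estimate, est_auth = transform({"X": 50, "Z": 100})   # X result + effect of Z
```

The key point the sketch captures is that the X+Z overprint is never stored, yet the engine still returns a usable estimate, flagged as non-authoritative so a later correction stage can adjust it toward known bounds.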
Our correction concept allows for multiple stages. The first stage has a set of subspace tables with high resolution (many steps per ink) but lower dimensionality. The optional following stages have a different set of subspace tables with lower resolution but higher dimensionality. If the transformation gets an authoritative answer at a stage, it stops. An estimated result can be taken to a next stage and be corrected according to an interpolation in those tables.
In this way we can use known n-dimensional corners, e.g. from our spectral prediction model, and adjust the estimation to predefined results.
The presentation will provide additional details and examples.
Prof. Dr. Volker Jansen
Hochschule der Medien, Stuttgart
This paper examines the sustainability challenges of multilayer flexible packaging (MFP), which relies on fossil‑based polymers and contributes significantly to global plastic waste due to limited recyclability. Although MFP delivers critical functional benefits, its composite structure complicates sorting and recycling, leading most post‑consumer film waste to be landfilled or incinerated. The study reviews emerging solutions including mono‑material designs, delamination, selective dissolution, and compatibilization techniques to improve material recovery. It concludes by outlining a roadmap for advancing MFP recycling while balancing performance and environmental responsibility. Full Abstract
Synthetic polymers used in the production of plastic packaging are largely derived from fossil hydrocarbons. Approximately 42% of synthetic polymers are used in packaging, often as multilayer flexible packaging (MFP) to enhance properties such as migration prevention, thermal stability, strength, and flexibility. These composites are, however, non-biodegradable and accumulate in landfills and the natural environment rather than decompose, with an estimated 30 million tons of plastic waste in oceans and another 109 million tons in rivers, to which MFP contributes significantly. Currently, 97% of post-consumer plastic films are incinerated or disposed of in landfills and water systems, presenting a substantial sustainability challenge for the food and beverage industry.
MFP is typically printed using flexographic or gravure printing and has flourished within a robust packaging printing market. However, this market now faces imminent challenges due to increasing regulations around substrate use and recycling. MFP substrates—comprising materials like polyethylene terephthalate (PET), polyethylene (PE), polypropylene (PP) and aluminum foil, just to name a few—provide essential protective functionalities but also introduce recycling complexities, resulting in low recycling rates for these materials. MFP, which accounts for 45–60% of the plastic packaging waste stream, is particularly difficult to sort and recycle due to its composite nature and low consumer recognition of multilayer structures, complicating its integration into existing collection and recycling schemes.
This study examines recent advancements and strategies in MFP recycling, focusing on mono-material approaches, delamination, selective dissolution, and compatibilization processes. Techniques such as delamination, involving the dissolution of adhesives to separate individual layers, and selective dissolution, which enables material recovery with minimal sorting, play a critical role in enhancing recycling efficiency. Additionally, compatibilization methods are applied to improve the mechanical stability of materials within single-stream recycling processes, thereby facilitating more robust and sustainable recycling outcomes.
This paper highlights the urgent need for sustainable MFP recycling solutions and offers a roadmap for future developments in MFP waste management. It emphasizes that while these innovations provide promising pathways, ongoing development is essential for meeting environmental targets. Ultimately, the goal is to create packaging that upholds product integrity, aligns with sustainability goals, and meets the demand for environmentally responsible production and consumption.
Bruce Leigh Myers
Rochester Institute of Technology
Color reflection spectrophotometers are widely used in printing, but uncertainty between different instruments and manufacturers presents a significant challenge for process control and quality assurance. Existing agreement data are limited to same‑model instruments and do not account for user practices, measurement conditions, or inter‑manufacturer variability. This study proposes the use of CCS‑II tiles combined with a Gage R&R methodology to systematically quantify instrument and appraiser variation across a population of spectrophotometers. The approach aims to establish realistic expectations for measurement uncertainty and provide a practical framework for diagnosing the sources of color measurement variation in real‑world print environments. Full Abstract
Color reflection spectrophotometers are used extensively throughout the printing industry for process control and quality assurance applications. It is not uncommon for several models of spectrophotometers from various manufacturers to be used by the myriad stakeholders for a printing job. Often, these instruments are from different manufacturers, and of various states of factory certification. Quantified uncertainty between spectrophotometer models (inter-model agreement) is therefore a real concern for users.
Indeed, the situation is rife with variables beyond the quantified uncertainty between spectrophotometers of the same type (inter-instrument agreement), which manufacturers publish for specific models when new. There is no known published agreement information for different manufacturers, or even for different models from the same manufacturer.
Color measurement itself is filled with numerous variables, including user-calibration, measurement technique, backing, colorimetric choices, measurement condition, and variability of the sample measured, which is related to the variation in instrument apertures. Practitioners facing color measurement uncertainty concerns are frequently met with costly factory certification and instrument replacement advice from spectrophotometer manufacturers. Such advice presumes that variation based on user-defined settings and measurement technique is held constant.
Further, some manufacturers and third parties have introduced ink-on-paper-based schemas which purport to allow users to better control for instrument uncertainty.
When inter-instrument agreement information is published by spectrophotometer manufacturers, the data are frequently based on an individual instrument’s variation from a “master instrument” when reading CCS-II tiles (alternatively known as BCRA or Lucideon tiles). CCS-II tiles offer several advantages for this purpose: they can be traceable to NIST, are easily cleaned, have known surface qualities, and are less susceptible to thermochromic changes than ink-on-paper.
This research examines a procedure for using CCS-II tiles and a population of spectrophotometers with Gage R&R (repeatability and reproducibility) analysis to inform the conversation about color measurement instrument uncertainty. Given that Gage R&R has been widely used as part of Measurement Systems Analysis (MSA) strategies since the 1990s, and that CCS-II tiles have been the benchmark in the manufacture of many spectrophotometers, it is curious that few managers of instrument populations have adopted this technique in their own facilities.
Using a population of spectrophotometers and CCS-II tiles, a Gage R&R study will be conducted with multiple appraisers, with primary goals including identifying the total expected variation due to differences in the instruments and differences in the appraisers.
The resultant data will inform a capability analysis and decisions about realistic uncontrolled variance given the population of instruments. In addition, the study will establish a procedure that enables a systematic analysis of future readings beyond the established capability to determine their likely cause, namely, whether the variation results from the device or from the appraiser. Other useful data may include potential problematic colors for the population of spectrophotometers examined.
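The separation of device and appraiser variation can be sketched as below. The readings are invented for illustration, and a production study would use the standard crossed Gage R&R ANOVA rather than this simplified pooled-variance estimate.

```python
from statistics import mean, pvariance

# readings[appraiser][tile] = repeated L* readings of a CCS-II tile.
# All values are invented for illustration only.
readings = {
    "A": {"cyan": [54.1, 54.2, 54.0], "grey": [60.0, 60.1, 59.9]},
    "B": {"cyan": [54.5, 54.6, 54.4], "grey": [60.4, 60.3, 60.5]},
}

# Repeatability (equipment variation): pooled within-cell variance.
cells = [r for tiles in readings.values() for r in tiles.values()]
repeatability = mean(pvariance(c) for c in cells)

# Reproducibility (appraiser variation): variance of per-appraiser means,
# averaged over tiles (a simplified, non-ANOVA estimate).
tile_names = readings["A"].keys()
reproducibility = mean(
    pvariance([mean(readings[a][t]) for a in readings]) for t in tile_names
)
grr = repeatability + reproducibility  # total Gage R&R variance
```

In this toy data the appraiser-to-appraiser offset dominates the within-appraiser scatter, which is exactly the diagnostic the abstract describes: a future out-of-capability reading could then be attributed to the appraiser rather than the device.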
Carolina Suárez Vallejo 1, 2 Edgar Dörsam 2 and Hans Martin Sauer 1
1 Sun Chemical GmbH 2 Technische Universität Darmstadt
This paper investigates adhesion and delamination behavior in multilayer flexible packaging structures, focusing on EB‑cured inks used with different substrates. It addresses limitations of conventional adhesion testing by developing a quantitative framework linking surface treatments, surface energy, roughness, and lamination bond strength. Using controlled surface pre‑treatments, advanced analytical techniques, and T‑Peel testing, the study examines how substrate properties and adhesive chemistry influence delamination mechanisms. The findings aim to improve predictive understanding of laminate performance, enabling more reliable and durable flexible packaging designs. Full Abstract
The flexible packaging industry is experiencing significant growth, driven by consumer demand for lightweight, convenient, and resource-efficient products. This expansion demands a parallel increase in the supply of high-performance inks and materials, particularly for multilayer laminated structures. The lamination of these structures is a complex process often compromised by issues related to adhesion, optical defects, and the long-term integrity of barrier properties. These laminates are primarily based on substrates such as biaxially oriented polypropylene (BOPP) and polyethylene terephthalate (PET). A key technological advancement in this sector is Electron Beam (EB) curing, which offers a sustainable and efficient method for flexible packaging processing.
However, the integrity and performance of these multilayer laminates critically depend on the lamination bond strength. Despite its crucial importance, the industrial evaluation of adhesion is often limited to simple lamination force measurements and qualitative tape-peel tests. These methods are insufficient for a comprehensive characterization of the delamination process and for accurately predicting the effects of material properties or processing conditions.
The cornerstone of successful lamination is robust interfacial adhesion. Flexible polymeric films possess low surface energy, which inherently hinders the wetting and bonding capabilities of solvent-free adhesives. This research first examines the surface effects of corona and EB radiation as pre-treatment methods on the most common packaging films. The study then quantifies the relationship between treatment energy, the increase in surface energy (measured via contact angle analysis), roughness, and the resulting bond strength achieved with different adhesive chemistries. It is important to consider that the treatment levels follow typical industrial conditions to avoid substrate degradation or “over-treatment,” which can negatively impact lamination quality.
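As one illustration of how contact-angle data can be turned into surface-energy figures, the widely used Owens–Wendt (OWRK) model solves a 2×2 linear system for the dispersive and polar components of the solid. The abstract does not state which model the authors apply, so this is a hedged sketch: the liquid data are standard literature values, and the contact angles are invented for a hypothetical untreated versus corona-treated film.

```python
import math

# Owens-Wendt: gamma_l * (1 + cos(theta)) / 2 = x*sqrt(gl_d) + y*sqrt(gl_p),
# where x = sqrt(gs_d), y = sqrt(gs_p) are the unknown solid components.
# Liquid data (mN/m): total, dispersive, polar (standard literature values).
LIQUIDS = {
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def surface_energy(theta_deg):
    """Solve the 2x2 Owens-Wendt system; theta_deg maps liquid -> angle."""
    rows = []
    for name, (gl, gl_d, gl_p) in LIQUIDS.items():
        theta = math.radians(theta_deg[name])
        rows.append((math.sqrt(gl_d), math.sqrt(gl_p),
                     gl * (1 + math.cos(theta)) / 2))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x * x + y * y, y * y  # (total, polar component) in mN/m

# Invented contact angles for an untreated vs corona-treated polyolefin film.
untreated = surface_energy({"water": 95.0, "diiodomethane": 55.0})
treated   = surface_energy({"water": 70.0, "diiodomethane": 50.0})
```

The sketch shows the qualitative behavior the paragraph describes: the treatment raises total surface energy mainly through the polar component, which is what promotes wetting by solvent-free adhesives.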
This research aims to develop a robust methodology for understanding the fundamental phenomena governing the delamination of flexible polymeric materials, specifically those laminated with EB-cured offset inks. The study seeks to move beyond simple force measurements and establish a scientific framework that allows for the quantitative interpretation and prediction of delamination behavior.
The primary substrates under investigation are chemically or Corona treated PET films and Corona treated BOPP films, all printed with EB-cured offset inks. The secondary substrate in all cases is a Corona treated PE film.
This investigation is guided by the following fundamental, practical questions: (i) How can the lamination forces of the composite structures studied here be improved? (ii) How do substrate properties influence the delamination mechanism in these multilayer systems? (iii) Can a quantitative relationship be established between the qualitative tape-peel test and the measured lamination bond strength?
This research employs a multi-faceted approach that combines experimental testing with state-of-the-art physical modeling. The experimental phase involves the controlled preparation of laminated specimens and the characterization of their surfaces using techniques such as GC-MS headspace, pyrolysis-MS, IR spectroscopy, confocal microscopy, and DSC. The T-Peel delamination test, conducted via an automated tensile tester, will be the primary method for quantifying lamination force.
As an example from the perspective of surface chemistry, Figure 1 presents the effects of corona and EB irradiation on the surface of BOPP, determined by GC-MS. The oxidative effect of this type of ionizing radiation is notable: the surface acquires chemical groups that promote interaction with adhesives and inks. The purpose of this investigation, therefore, is to quantify the magnitude of these changes in surface chemistry.
Lukas W Jenner
Hochschule der Medien, Stuttgart
Adhesives are essential to modern manufacturing, offering functional and design advantages over traditional joining methods across numerous industries. However, the global adhesive market relies heavily on petrochemical-based materials, resulting in substantial environmental impacts related to resource depletion, emissions, recycling challenges, and persistent waste. While bio‑based and biodegradable adhesives have emerged as alternatives, they also present environmental trade‑offs across their life cycles. This study reviews the ecological burdens of both synthetic and bio‑based adhesives and explores pathways toward more sustainable adhesive technologies. Full Abstract
Adhesives are omnipresent in contemporary society and are utilised in a wide range of industrial and consumer applications. Their ability to bond different materials efficiently and permanently has made them an indispensable component of modern manufacturing processes. When compared with alternative joining techniques, such as welding, riveting, or mechanical fastening, adhesives offer distinct advantages in terms of efficiency, design flexibility, stress distribution, corrosion resistance, and overall weight reduction (Packham, 2009; Eisen, Bussa & Röder, 2020). The fields of application are correspondingly diverse, encompassing sectors such as packaging, construction, medicine, aerospace, electronics, and the graphical industry, where adhesives play a crucial role in ensuring functionality and performance.
According to recent market analyses, the global adhesive industry continues to expand steadily. The market size in 2024 is estimated to be approximately 69 billion USD, making it one of the most significant material markets worldwide (Mordor Intelligence, 2025). This economic relevance is mirrored in production volumes: in 2022, the global production of adhesives reached roughly 13.5 million tons, with the vast majority derived from petrochemical sources (Ceresana Market Research, 2025). The synthesis of these synthetic adhesives typically involves the use of petroleum-based polymers, a finite and non-renewable resource, and they are frequently non-biodegradable. Consequently, they contribute to long-term environmental concerns, such as the accumulation of microplastics and persistent organic residues in terrestrial and aquatic ecosystems (Eisen, Bussa & Röder, 2020; Onusseit, 2006; Zhang, Gao, Kang, Shi, Mai, Allen & Allen, 2022).
In addition to end-of-life issues, the production phase of petrochemical adhesives is characterised by high energy consumption and significant greenhouse gas emissions, thereby amplifying their overall ecological footprint (Packham, 2009). Environmental implications are also observed during recycling. In paper recycling, for instance, adhesive residues, termed “stickies,” induce operational challenges: they increase water demand during the flotation process and form agglomerates that can clog machinery (Habenicht, 2009, pp. 763–764).
In view of the mounting global focus on sustainability and resource efficiency, these challenges have given rise to a search for bio-based and biodegradable alternatives. Nevertheless, even bio-origin adhesives are accompanied by environmental trade-offs in terms of raw material sourcing, processing energy, and end-of-life behaviour.
The present study systematically examines the environmental problems associated with adhesives of both petrochemical and biological origin. The objective of this study is to provide a comprehensive overview of the ecological burdens along the adhesive life cycle, from raw material extraction to disposal, and to identify possible pathways towards more sustainable adhesive technologies.
Aileen Chiu
Sun Chemical
This paper examines the use of an Extended Color Gamut (ECG) workflow for Direct Food Contact (DFC) printing, where strict safety regulations limit pigment choice and achievable color gamut. Building on prior work that identified 1,801 reproducible Pantone colors under DFC constraints, the study evaluates a CMYKOV ink set for sheetfed printing. Results show that while Orange and Violet inks expand the gamut beyond standard CMYK, regulatory pigment restrictions introduce challenges such as reduced vibrancy, hue shifts, gray balance casts, and undefined density targets. The findings help printers and brand owners balance color performance, regulatory compliance, and sustainability in DFC packaging workflows. Full Abstract
Direct Food Contact (DFC) printing plays a critical role in food packaging, where inks must comply with stringent safety regulations to prevent contamination. These regulatory and formulation constraints significantly limit pigment selection, which in turn affects the achievable color gamut. For brand owners and printers, this creates a challenge: how to maintain color fidelity and visual appeal while ensuring consumer safety and sustainability.
In a prior study, we quantified the color capabilities of a compliant ink set by mapping achievable Pantone Solid Coated digital colors within a tolerance of CIEDE2000 < 2. This analysis revealed 1,801 colors that could be reproduced under DFC conditions, providing a baseline for understanding the limitations imposed by pigment restrictions. Building on this foundation, and with sustainability objectives in mind, we explored whether an Extended Color Gamut (ECG) workflow could enhance color reproduction without compromising compliance.
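The tolerance-counting step described above can be sketched as follows. For brevity this uses the CIE76 ΔE*ab distance as a simplified stand-in for the CIEDE2000 formula the study actually applied, and the Lab values are invented for illustration.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab. A simplified
    stand-in for the CIEDE2000 formula used in the study."""
    return math.dist(lab1, lab2)

def count_reproducible(targets, achieved, tol=2.0):
    """Count target colours whose best achievable match is within tol."""
    return sum(
        1 for t in targets
        if min(delta_e76(t, a) for a in achieved) < tol
    )

# Invented Lab values for illustration only; the study ran this kind of
# comparison over the full Pantone Solid Coated digital library.
targets  = [(50, 60, 30), (70, -20, 55), (30, 40, -60)]
achieved = [(50.5, 59.0, 30.5), (30, 42, -58)]
n = count_reproducible(targets, achieved)
```

Scaled up to the full library against the gamut achievable with DFC-compliant pigments, this is the kind of computation that yields the 1,801-color figure quoted above.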
This paper evaluates the CMYKOV (Cyan, Magenta, Yellow, Black, Orange, Violet) ink set formulated for Direct Food Contact applications in sheetfed printing. It examines CMYKOV’s ability to extend the chromatic range relative to the characterized reference printing condition CRPC-6 of ISO/PAS 15339, while recognizing its limitations compared to the broader Pantone Solid Coated library used in packaging printing. Although the addition of Orange and Violet inks expands the gamut beyond conventional CMYK, regulatory pigment restrictions introduce significant challenges. These include reduced vibrancy, hue shifts compared to standard CMYK process printing, and a noticeable color cast in neutral gray areas. In addition, the absence of established density specifications for this ink set complicates process control.
The findings offer practical insights for color engineers, prepress specialists, and food packaging printers seeking to balance safety, sustainability, and color performance. By understanding the trade-offs inherent in DFC-compliant ECG workflows, stakeholders can make informed decisions about ink selection, color management strategies, and brand color expectations.
Kinda Aboushanab and Dr. Reem El Asaleh
Toronto Metropolitan University
This paper examines the convergence of Generative AI and Digital Asset Management (DAM) systems and their influence on graphic communications workflows. Using surveys and expert interviews, it explores adoption challenges, governance gaps, sustainability concerns, and Equity, Diversity, and Inclusion (EDI) implications. Results reveal widespread awareness of AI's potential but limited, compliant implementation due to weak governance structures and underutilized DAM infrastructure. The paper concludes that integrated DAM strategies and clear governance are essential to responsibly realize the creative and operational benefits of Generative AI. Full Abstract
The graphic communications industry is facing a rapidly evolving synergy between Generative Artificial Intelligence (AI) and Digital Asset Management (DAM) systems. While graphic communications relies on graphics as the primary form of visual communication, AI is profoundly influencing branding, marketing, education, entertainment, and our understanding of complex data. From traditional print media to the digital landscape, the field has evolved to become an indispensable part of our daily lives. The workflow of graphic communications encompasses ideation, production of tangible and digital items, management, and customer satisfaction. In recent years, it has undergone a transformative shift driven by technological advancements. Among the most promising and disruptive forces at the forefront of this evolution is the convergence of Generative AI and DAM.
The explosive growth of digital content has transformed production technologies, giving rise to the concept of “create once, use many”. This paradigm shift, coupled with the need to manage vast amounts of content, led to the emergence of DAM systems in the mid-1990s as a critical, centralized platform for storing, organizing, retrieving, and distributing digital assets. In recent years, the industry has embraced emerging technologies such as AI that utilize algorithms and machine learning to enable computers to possess cognitive skills and competencies for sense-making and decision-making. Leveraging AI algorithms and neural networks to create digital content, marketing materials, and to conceptualize designs autonomously is poised to redefine production speeds and creative outputs. This research paper presents a formal investigation into this convergence and the effective governance of these technological pillars, and examines their impact within the graphic communications landscape. Generative AI is revolutionizing the graphic communications industry, offering innovative design, production, and workflow optimization solutions. However, the rapid, high-volume output of synthetic content bypasses established quality control and rights management procedures. This creates significant risks centred on intellectual property (IP) infringement, the ethical sourcing and legal provenance of training data, and the potential for embedded bias, touching upon crucial Equity, Diversity, and Inclusion (EDI) considerations in generated assets. Consequently, the absence of a cohesive, universal governance model for AI-generated assets constitutes the central challenge impeding the responsible and effective adoption of this evolving technology across workflows.
While the industry universally acknowledges the potential of Generative AI to revolutionize ideation and production processes, its compliant integration into structured workflows remains challenging. Key research questions, investigated under The Creative School’s Lab of Excellence in Digital Asset Management (LED), include how Generative AI is currently being integrated into graphic communications practices, how its functionality can enhance or transform DAM systems’ efficiency and creativity, and the accompanying EDI considerations and operational challenges associated with the use of Generative AI in DAM systems for graphic communications. To answer these questions and gain a comprehensive first-hand understanding, this study employed a mixed-methods approach combining qualitative inquiry and quantitative analysis. The quantitative component involved the distribution of a small-scale online survey to professionals across various graphic communications sectors worldwide, designed to capture data across five focus areas: demographic information, awareness of the technologies, DAM implementations, EDI considerations, and future perspectives within the industry. Respondents spanned diverse roles, including print production managers, digital marketing specialists, and DAM specialists across multiple regions, primarily in North America. This was paired with the qualitative component, which consisted of interviews with industry leaders and seasoned practitioners aimed at gaining in-depth subjective insights into their experiences, perspectives, and attitudes regarding both technologies, focusing on the tangible possibilities for creative innovation, improved automation integration, and optimized operational efficiency, as well as the perceived risks of these emerging tools.
The goal of this methodology was to broaden the scope of findings, identify what is actively being practiced in the industry, and capture the subjective experiences and decision-making factors guiding current technological investments.
The analysis of the collected data revealed several critical insights into the capabilities of these technologies, their governance, sustainability, and future promise within the graphic communications industry. A key finding was a significant gap between awareness of Generative AI and its compliant implementation: the majority of respondents demonstrated a high level of awareness of Generative AI capabilities and its potential for increased efficiency and creative conceptualization, but reported a fundamental lack of governance or established policies in their workplace necessary to utilize the technology safely or legally in a commercial context. This lack of governance proved to be the primary barrier preventing Generative AI from transitioning from a rising technology to a widely implemented, compliant tool. Furthermore, the research found that a notable segment of respondents are actively choosing not to utilize AI, specifically citing sustainability concerns arising from the high energy consumption of training and operating large text and image models. The study also observed a profound under-usage of organizational infrastructure: only a small fraction of respondents utilize DAM as a centralized network for their operations. This low implementation rate was due not to a lack of awareness of DAM’s strategic value, but to budgetary and scalability constraints. Unexpectedly, a notable fraction of responses addressed potential EDI concerns; however, 28% of respondents did not perceive Generative AI as having a significant impact, suggesting that while ethical considerations are front of mind, EDI challenges in AI adoption may not yet be fully understood or experienced by industry professionals.
Despite infrastructural deficits, these professionals widely recognize the potential of these technologies within their workflows and the necessity of a centralized platform. By exploring the intersection of generative AI and DAM within the context of graphic communications, this research seeks to shed light on the challenges, opportunities, and ethical considerations that will define the industry's future. This research endeavour aims to contribute to the advancement of knowledge in the field and inform industry practices in graphic communications and digital asset management. In doing so, the study provides valuable insights for professionals, organizations, and researchers looking to navigate the evolving landscape and harness the full potential of these transformative technologies in graphic communications.
Charmaine Martinez
California Polytechnic State University
This study analyzes nanographic printing from a graphic designer's perspective, highlighting gaps in both design education and professional print workflow knowledge. It shows how nanographic printing differs from offset and other digital methods and identifies best practices for ideation, client communication, and file preparation to maximize the technology's expanded color gamut. The research evaluates press‑sheet results, ICC profiles, color spaces, and file formats using output from a Landa S10P press and supporting microscopic and photographic analysis. The findings provide practical guidance for designers while helping align educational curricula with real‑world printing capabilities. Full Abstract
Landa printing presses are at the forefront of B1 digital printing and offer a color gamut and visual richness that far exceed four-color process offset printing. Many graphic designers—students and professionals alike—are unfamiliar with current advances in print technology and lack knowledge of the types of projects appropriate for nanographic printing and of the proper workflow for building files optimized for Landa presses. This paper will view Landa digital printing technology through a designer’s lens with the goal of determining what information graphic designers need to optimize their workflow—from ideation to final production—for nanographic printing.
This paper will address the following questions:
• In what ways does nanographic printing differ from offset and other digital printing methods?
• How can graphic designers create projects that are optimized to take advantage of the full capabilities of nanographic printing? Are there best practices for project ideation, client communications, and file prep for Landa printing presses that should be part of the design process?
• Should these best practices be incorporated into undergraduate design education, which tends to promote conceptual, blue sky ideas and human-centered design over technical instruction?
Much of the industry literature about Landa printing presses focuses on the technical capabilities of their nanographic machines and claims about color gamut and printing efficiency, without addressing how designers can prepare files most effectively and determine appropriate projects for this type of printing. Many professional designers rely on production artists and printers to transform their documents into print-ready files, and take a hands-off approach to print production. Without an adequate understanding of current advances in digital printing, designers may be missing opportunities to conceptualize and create projects that take full advantage of nanographic printing with its seven-color palette. In addition, undergraduate design education tends to ignore print production as a key skill for graduates. The assumption is often that these technical skills can be learned on the job and that many students may never work on print projects professionally.
Research for this paper will address the needs of designers and design educators, who may lack knowledge of current advances in print technology. Research will include an analysis of press sheets to compare the results of files set up in different color spaces to determine how various file formats and ICC profiles perform when printing on a Landa S10P press. Current test sheets (attached), designed by my colleague and printing expert Brian Lawler, include:
Charmaine Martinez • [email protected] • TAGA 2026 Technology in Action Proposal
• Color profile test targets – both RGB and CMYK
• Color images in RGB, CMYK, bitmap (line art), and duotone color spaces
• Embedded profiles – a wide variety from sRGB to ProPhoto RGB
• Color file formats: PSD, TIFF, SCT, PNG, Lab, EPS
• Grayscale file format: PSD
• Duotone image, saved as Photoshop EPS, with three select Pantone colors
• Very small type
• Very fine lines – positive and negative
• Line art scans of very fine line engravings at 300, 600, 800, 1200 and 2400 ppi
• 0–255 grayscale ramp in RGB
• 0–100% ramps of Cyan, Magenta, Yellow and Black, separately, with linearity analysis
• A named Pantone color
• A color trapping opportunity (four colors all intersecting) to test trapping and register
• Rich black and single-color black
• Subtle pastel colors in RGB and CMYK
The test sheets show some unexpected results, especially in terms of color gamut between RGB and CMYK files (see Appendix). Embedded profiles also performed unpredictably. These anomalies require further investigation in order to propose recommendations for image file formats that will produce the best printed results. In addition to the analysis of test sheets, research into the stability and lightfastness of Landa’s NanoInk® water-based inks will be used to determine what types of publications and printed materials are appropriate for nanographic printing, and whether there are any limitations to this type of printing—including tests for lightfastness and archival characteristics.
Included in the research are ICC profiles made from the RGB test patch set and the CMYK test patch set, and analysis of the gamuts of color available in both. Volumetric diagrams of the color gamuts of RGB and CMYKOGB color will also be included (see Appendix). Photomicrographs of the various test components will be provided, along with analysis of their characteristics. Special attention will be paid to how the Landa presses, and their embedded Fiery front-end system, process continuous-tone images and convert them into stochastic-pattern halftone images using the full gamut of colors available on the machines.
For some of the post-press analysis, we have compared the output of the Landa S10P press to the output of an Epson ink-jet printer in order to analyze the ICC profiles generated by printing the same color patch sets on both machines, then making profiles from the resulting printed sheets. Though vastly different in size and capability, the two machines share an expanded color gamut that allows for a much greater volume of color than any CMYK device (see Appendix).
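As a rough illustration of how gamut extent can be compared from measured patch values, the sketch below bins hypothetical L*a*b* readings by hue angle and records the maximum chroma per slice — a crude two-dimensional proxy for the boundary comparison that the ICC-profile volume diagrams perform properly. All patch values and the bin count are invented for illustration; they are not measurements from the Landa or Epson devices.

```python
import math

def hue_slice_max_chroma(lab_points, bins=8):
    """Maximum measured chroma C*ab per hue-angle slice: a simple
    2-D proxy for how far a device's gamut reaches at each hue."""
    maxima = [0.0] * bins
    width = 360.0 / bins
    for L, a, b in lab_points:
        hue = math.degrees(math.atan2(b, a)) % 360.0
        i = int(hue / width) % bins
        maxima[i] = max(maxima[i], math.hypot(a, b))
    return maxima

# Hypothetical patch readings (L*, a*, b*) -- invented, not press data.
cmyk = [(55, 68, -5), (48, 55, 50), (88, -8, 80), (52, -60, 30), (54, 10, -62)]
cmykogb = cmyk + [(62, 70, 60),   # orange patch beyond the CMYK red-yellow reach
                  (55, -80, 55),  # saturated green
                  (40, 25, -85)]  # violet-blue

base, expanded = hue_slice_max_chroma(cmyk), hue_slice_max_chroma(cmykogb)
gained = sum(1 for m1, m2 in zip(base, expanded) if m2 > m1)
print(f"hue slices with higher max chroma on the 7-color set: {gained}/8")
```

In practice the profiling software computes a full 3-D gamut hull from the measured patches; the per-hue maxima above only hint at where the extra orange, green, and blue inks extend the boundary.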
Landa makes a number of bold claims about their presses. This research aims to address any gaps and/or omissions between Landa’s marketing materials and actual printed samples, while also providing insights into how graphic designers can leverage their creative work to take advantage of the full capabilities of nanographic printing.
Jinhee Nam and Renmei Xu
Ball State University
This study investigates color changes in digitally printed fabrics before and after washing, focusing on the color stability of commonly used apparel materials. Four fabric types were examined using a factorial experimental design with fabric type and CMYK color as independent variables. Standardized AATCC washing methods were applied, and color measurements were taken using CIE Lab* values recorded by a spectrophotometer. Color differences before and after washing were statistically analyzed using t‑tests to evaluate the effects of fabric type and color on post‑wash colorfastness. Full Abstract
Digital fabric printing offers multiple benefits, such as vivid color expression and a feasible approach to producing samples with sustainable features. The range of available fabrics has also been extended to include jersey, stretch fabrics, and rough-surface fabrics.
This study aimed to observe the color changes in fabrics before and after washing. The purpose of this study was to test the color changes and differences in four types of fabrics with different textures: Cotton Spandex Jersey (93% cotton and 7% spandex), Sport Lycra (88% polyester and 12% Lycra), Dogwood Denim (100% cotton), and Polartec Fleece (100% polyester). These fabrics were chosen because they are frequently used in the apparel industry.
The research followed a factorial experimental design to examine the effects of fabric type and color on color stability after washing. The two independent variables in this study were fabric type, which consisted of four levels, and color, which consisted of four levels including cyan, magenta, yellow, and black. The washing method followed the washing procedure specified in AATCC Method 135-1995 (Shrinkage Test) and AATCC Method 124-1996 (Fabric Smoothness Test After Repeated Washing) with adjustment.
The dependent variables were the CIE L*a*b* values of the CMYK colors, which were measured at three locations on each color patch for the four fabric types using an X-Rite spectrophotometer before and after washing. Color differences were then calculated to assess the extent of color change and compared using t-tests to study the influence of fabric type and color on post-wash colorfastness.
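As a sketch of the measurement arithmetic described above — with all numbers invented for illustration, not study data — the CIE76 color difference ΔE*ab can be computed per measurement location and the resulting differences compared with a t statistic (Welch's form is used here; the abstract does not specify which t-test variant was applied):

```python
import math
from statistics import mean, stdev

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples of Delta E values."""
    vx, vy = stdev(xs) ** 2 / len(xs), stdev(ys) ** 2 / len(ys)
    return (mean(xs) - mean(ys)) / math.sqrt(vx + vy)

# Hypothetical before/after-wash readings for one cyan patch on two
# fabrics, three measurement locations each (illustrative values only).
before = (54.0, -37.0, -50.0)
cotton_after = [(56.1, -34.0, -46.5), (55.9, -33.8, -46.2), (56.3, -34.3, -46.9)]
poly_after   = [(54.6, -36.2, -49.1), (54.4, -36.4, -49.3), (54.7, -36.0, -48.9)]

de_cotton = [delta_e_ab(before, a) for a in cotton_after]
de_poly   = [delta_e_ab(before, a) for a in poly_after]
print(round(mean(de_cotton), 2), round(mean(de_poly), 2),
      round(welch_t(de_cotton, de_poly), 2))
```

A larger mean ΔE indicates greater post-wash color change; the t statistic then tests whether the two fabrics differ significantly.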
Don Schroeder and Ben Lubin
FUJIFILM North America Corporation
This session provides an update on ISO's worldwide objectives and the current goals of USA TC/130, including an overview of ISO TC 130's structure and active working groups. It offers an up‑to‑date review of current and emerging ISO TC 130 standards, examined through both technical and business‑impact lenses. The presentation highlights recent and planned updates to ISO 12647‑2, including the proposed inclusion of Near Neutral Calibration (G7) alongside traditional TVI‑based press controls, and discusses ongoing development of the UCD. An interactive Q&A will address questions about ISO TC 130 standards and how industry stakeholders can participate in and influence future standardization.
Umme Abiha
Toronto Metropolitan University
Accessibility in CPG packaging is routinely marginalized, treated as an afterthought despite increasing awareness and regulation. Using a mixed‑methods approach with Microsoft's Seeing AI and a supporting literature review, we identified widespread issues with readability, contrast, tactile features, and accessible digital content across eleven products. The findings indicate that internal misconceptions about cost and time, rather than technical limitations, are the primary barriers, even though early inclusive design demonstrably reduces long‑term risk and improves efficiency. The paper concludes that embedding accessibility into packaging design is both an ethical responsibility and a strategic business opportunity. Full Abstract
This thesis investigates the ongoing marginalization of accessibility in the Consumer Packaged Goods (CPG) industry, challenging the persistent perception of accessibility as an afterthought rather than a design priority. Despite increased awareness and evolving regulatory frameworks, accessibility in packaging continues to be inconsistently implemented or entirely neglected. Many companies cite cost, time constraints, or competing business priorities as barriers, reflecting deep-rooted misconceptions that hinder meaningful progress toward inclusivity.
Through a mixed-method approach—combining primary research using Microsoft’s Seeing AI app with an extensive literature review—this study evaluates the real-world accessibility of product packaging and examines how these misconceptions manifest in practice. Eleven products across various CPG categories were assessed using five criteria: recognition accuracy, text readability, contrast and font size, tactile features, and the presence of digital accessibility tools such as QR codes.
The Seeing AI app successfully identified all products, confirming its effectiveness in basic object recognition. However, the tool revealed significant shortcomings in communicating critical product details, particularly when packaging lacked adequate contrast, legible typography, or accessible digital content. Four of the eleven tested products had readability issues, and none included Braille or tactile indicators. Additionally, only two products featured QR codes, neither of which offered accessibility-focused information. These findings align with literature emphasizing that while accessibility technologies exist, their impact is constrained by the packaging industry’s failure to design inclusively from the outset. The absence of tactile or digital accessibility elements reinforces the notion that companies often approach accessibility reactively, introducing minimal measures only when required by law or public scrutiny.
The discussion of these findings reveals a broader pattern of industry-wide neglect. Accessibility remains sidelined within corporate structures that prioritize aesthetics, marketing, and brand uniformity over usability and inclusivity. Cost and time misconceptions persist as recurring justifications, despite evidence showing that early integration of accessibility reduces redesign costs, strengthens consumer trust, and improves workflow efficiency. The results support research by Sloan, Moulton, and others, which highlights that inaccessible design incurs higher long-term expenses through legal risks, consumer dissatisfaction, and missed opportunities. Rather than being a financial burden, accessibility represents a sustainable investment in both innovation and competitiveness. Yet, the continued absence of accessible packaging demonstrates that internal misperceptions, not technological limitations, are the real obstacles to inclusion.
Equally important is the role of accountability. Many brands promote diversity and inclusion in their marketing but fail to reflect those values in their packaging design. The inconsistency observed across CPG formats—ranging from food to personal care and pharmaceuticals—shows that accessibility is neither standardized nor enforced. This disconnect underscores systemic gaps between corporate messaging, regulatory frameworks, and actual consumer experience. Even in countries with established accessibility legislation, such as Canada, the lack of specific packaging mandates allows businesses to deprioritize inclusive design without consequence. The data collected in this study, combined with supporting literature, points to a need for stronger oversight, clearer design standards, and measurable accountability frameworks within the CPG sector.
Ultimately, the findings illustrate that accessibility is both a moral and strategic imperative. Accessible packaging fosters consumer autonomy, supports public safety, and enhances brand reputation—while its absence perpetuates inequity and consumer exclusion. As this research demonstrates, accessibility should not be treated as a retroactive fix or compliance checkbox but as a proactive component of innovation and product development. Integrating accessibility from the earliest design stages—through user-centred design, co-creation with consumers with disabilities, and inclusive testing—offers tangible advantages: reduced production costs, increased customer loyalty, and access to a growing market valued at over $13 trillion globally.
This thesis concludes that the continued neglect of accessibility in the CPG industry reflects a failure of strategic vision rather than technological capability. By shifting from reactive compliance to proactive inclusion, companies can bridge the gap between intention and implementation, aligning social responsibility with commercial success. The evidence presented reinforces that accessible packaging is not only a legal and ethical obligation but also a significant business opportunity. To remain competitive and relevant, CPG brands must embed accessibility into the core of their design and marketing strategies—ensuring that inclusivity is not merely promised, but practiced.
Carl Blue
Clemson University
This study proposes a new Variable Data Printing (VDP) paradigm that integrates GPT‑5 as an adaptive design engine within digital print and communication workflows. By enabling semantic reasoning, dynamic content generation, and conditional design logic, GPT‑5 expands VDP beyond static template-based personalization while incorporating sustainability metrics such as E‑ROI. A simulated campaign demonstrates how CRM data can drive AI‑generated personalized print and digital outputs with improved efficiency and reduced material waste. The research also addresses educational implications, positioning AI‑driven VDP as a critical skill set for future graphic communication professionals. Full Abstract
This study introduces a new paradigm for Variable Data Printing (VDP) that integrates large language models—specifically GPT-5—as adaptive design engines within digital print and communication workflows. Drawing on foundational VDP research (Acuna-Stamp, 2000; Gore et al., 2009; McAllister, 2011; Simske et al., 2012), the project examines how AI can interpret structured datasets, apply conditional design logic, and generate personalized creative content for both print and digital channels.
Traditionally, VDP merges text and imagery from databases into static templates. GPT-5 expands this capacity by performing semantic reasoning, generating copy and visual prompts dynamically, and applying optimization criteria such as Simske’s Environmental Return on Investment (E-ROI) model to reduce material waste. The result is an intelligent workflow that merges data, design, and sustainability—producing unique communications while improving operational efficiency and environmental performance.
The presentation demonstrates a simulated campaign workflow using CRM-style data to generate personalized postcards and digital ads. It also outlines the pedagogical implications for preparing students in graphic communications to design, manage, and evaluate AI-enhanced print systems aligned with Industry 4.0 skill sets.
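To make the conditional-design step concrete, here is a minimal sketch of how CRM attributes might drive layout rules and prompt assembly. The record fields, rules, and template names are all invented for illustration, and the model call itself is stubbed out rather than invoked — this is not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """One CRM record driving a personalized piece (fields are assumed)."""
    name: str
    segment: str          # e.g. "loyal" or "lapsed"
    last_product: str
    prefers_print: bool

def design_rules(c: Contact) -> dict:
    """Conditional design logic: map CRM attributes to layout choices."""
    return {
        "channel": "postcard" if c.prefers_print else "digital_ad",
        "template": "winback" if c.segment == "lapsed" else "upsell",
        "hero_image": c.last_product,
    }

def build_prompt(c: Contact, rules: dict) -> str:
    """Assemble the generation prompt; the LLM call is deliberately stubbed."""
    return (
        f"Write a 25-word {rules['template']} headline for {c.name}, "
        f"referencing their interest in {c.last_product}, "
        f"for a {rules['channel']} layout."
    )

crm = [Contact("Ana", "lapsed", "canvas prints", True),
       Contact("Ben", "loyal", "photo books", False)]

jobs = [(design_rules(c), build_prompt(c, design_rules(c))) for c in crm]
for rules, prompt in jobs:
    print(rules["channel"], "|", prompt)
```

In a full workflow, each assembled prompt would be sent to the model and the returned copy merged into the chosen template, with an E-ROI check gating which pieces are actually printed.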
This research provides a framework for AI-driven VDP as a bridge between creativity, data literacy, and sustainable production, advancing both professional practice and design education.
Amanda Bridges, Celeste Calkins, Erica Walker
Clemson University and Illinois State
This study evaluates whether emerging digital textile printing technologies can support a shift from fast fashion toward more sustainable, short‑run "slow fashion" production. Focusing on color accuracy, the research examines how well direct-to-film (DTF) printing reproduces a broad color spectrum on different textile compositions, with and without a white ink base, when no custom color profiles are applied. Repeated color targets are used to assess variability and identify colors that present greater accuracy challenges in digital textile workflows. The findings aim to inform manufacturers, brand owners, and educators about the feasibility of achieving reliable brand color in short‑run digital textile printing. Full Abstract
The market for printed textiles is a rapidly growing segment of the industry due in large part to the increase in on-demand and short-run printing. “Fast fashion” is transforming the textile industry in ways that many perceive to be environmentally unfriendly and not sustainable. The focus of “fast fashion” is on cheaper manufacturing processes, mass consumption, and short-term garment use (Niinimaki et al., 2020). While screen printing remains dominant for mass production, digital printing technologies are on the rise for smaller, more customized print runs that often feed the “fast fashion” trend. This study is designed to determine if digital technology could support a transition back to a more traditional “slow fashion” industry should consumer behavior and environmental advocates demand that change.
Direct-to-garment and direct-to-film printing are two relatively new technologies that are increasing in popularity. Direct-to-garment printing uses a modified inkjet technology that sprays ink directly onto the textile (Walker, 2022). Direct-to-film prints onto a heat-transfer film that is then transferred to the garment using a special adhesive powder and a heat press (Montoya, 2025). Some advantages of these processes include outstanding image detail, higher energy efficiency, and versatility of fabric (Walker, 2022; Montoya, 2025). However, these emerging technologies also introduce some challenges for manufacturers. Some notable issues include color accuracy, the ability to ensure consistent color when transitioning from spot colors in screen printing to CMYK in digital print methods, and maintaining color durability over time. For this study, color accuracy refers to the recreation of a specified color onto the textile. This becomes highly important in branding contexts where a brand insists on color accuracy across all products irrespective of manufacturing process or substrate type. Color consistency refers to the reproducibility of a color over repeated prints. In some cases, there may be external or environmental changes that result in diminished color consistency. This study will specifically focus on color accuracy in direct-to-film printing using a variety of textile compositions, including polyester and cotton/polyester blends, with a test target consisting of a broad spectrum of colors. Each color will be repeated three times on the target to ensure that the process does not introduce significant variance dependent on where a color is found in the design.
Research questions include: How accurately does the direct-to-film process reproduce color if we do not apply a custom color profile? Within that analysis, we will also examine the color accuracy difference with white ink as a base beneath the print and without white ink.
One of the goals of this study is to determine if there are specific colors that are more difficult to accurately and consistently print with direct-to-film. Further, we will consider how CMYK inks are impacted with and without a white base when printing on a white shirt.
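A minimal sketch of the white-base comparison the study proposes — with all L*a*b* numbers invented for illustration, not measured DTF output — computes the mean ΔE to the target color (accuracy) and the spread across the three repeats (consistency) for each condition:

```python
import math
from statistics import mean, stdev

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: one target color printed three times, with and
# without a white ink base (values invented for illustration).
target = (45.0, 60.0, 30.0)
with_white = [(46.0, 58.5, 31.0), (45.6, 59.0, 30.4), (46.3, 58.8, 31.2)]
no_white   = [(49.5, 53.0, 27.0), (51.0, 52.0, 26.2), (48.8, 54.1, 27.9)]

def accuracy_and_spread(reads):
    """Mean Delta E to target (accuracy) and stdev across repeats (consistency)."""
    des = [delta_e(target, r) for r in reads]
    return round(mean(des), 2), round(stdev(des), 2)

print("white base:", accuracy_and_spread(with_white))
print("no base:   ", accuracy_and_spread(no_white))
```

Running the same calculation per target color would surface which colors are hardest to hold, and how much the white underbase changes both the hit rate and the repeat-to-repeat spread.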
The results of this study will be meaningful for several different audiences including textile manufacturers, brand holders, educators, and consumers. The results of this study can provide textile manufacturers with color accuracy recommendations and potentially inform brand holders about whether short-run orders provide accurate brand color. Results will also be beneficial to graphic communications educators who are seeking out effective strategies for teaching students about color management, the difficulties associated with digital print technologies, and the unique challenges textile printing presents in the area of color management.