2:30pm - 2:50pm
Modified Murphree Efficiency for Realistic Modeling of Liquid-Liquid Extraction Stage Efficiencies
Mahdi Mousavi, Ville Alopaeus
Aalto University, Finland
Liquid-liquid extraction (LLX) is a fundamental separation method in chemical engineering, widely used to separate components based on their solubility in two immiscible liquids (Thornton, 1996). Accurately modeling stage efficiencies in LLX processes is essential for reliable process design and optimization. However, conventional process simulation software (e.g. Aspen Plus) often struggles to represent LLX stage efficiencies precisely. The default methods, such as applying efficiency factors directly to distribution coefficients, can distort equilibrium calculations and lead to inaccurate simulation results. This study introduces a modified Murphree efficiency definition to model LLX stage efficiencies more accurately within process simulators, enhancing the reliability and accuracy of process simulations.
To address the limitations of existing efficiency definitions in simulation software, we revisited the concept of stage efficiency, focusing on the Murphree efficiency. While Murphree efficiency is widely used in distillation processes, its direct application to LLX is not straightforward due to the differences in phase behavior. We modified the standard Murphree efficiency by substituting mole flows for mole fractions, aligning the efficiency calculation with the mass transfer characteristics of LLX processes. This modification accounts for deviations from equilibrium caused by factors such as insufficient mixing, mass transfer resistance, and other operational inefficiencies, providing a more realistic representation of the actual performance of extraction stages.
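For readers who prefer a concrete expression, one plausible flow-based form of such a definition is given below. This is our illustration of the idea; the exact formulation used by the authors may differ.

```latex
% Standard Murphree efficiency for stage n (mole-fraction based):
%   E_{MV} = (y_n - y_{n+1}) / (y_n^{*} - y_{n+1})
% A flow-based analogue for an LLX stage j (illustrative form):
\[
E_{j} \;=\;
\frac{\dot{n}^{E}_{i,j,\mathrm{out}} - \dot{n}^{E}_{i,j,\mathrm{in}}}
     {\dot{n}^{E,*}_{i,j} - \dot{n}^{E}_{i,j,\mathrm{in}}}
\]
```

Here \(\dot{n}^{E}_{i,j,\mathrm{out}}\) is the mole flow of component i in the extract leaving stage j, \(\dot{n}^{E}_{i,j,\mathrm{in}}\) is the corresponding flow entering the stage, and \(\dot{n}^{E,*}_{i,j}\) is the flow that would leave if the stage reached liquid-liquid equilibrium.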
Implementing this modified efficiency model involved creating a multi-stage LLX column within a custom modeling environment, specifically Aspen Custom Modeler (ACM). Each stage in the model applies the modified Murphree efficiency, allowing for a detailed and realistic simulation of the extraction process. The custom model was then integrated into Aspen Plus, enabling users to perform simulations that reflect real-world inefficiencies and operational conditions in LLX processes. This integration provides greater control over the simulation, including the precise application of stage efficiencies, and overcomes the constraints of default efficiency calculations in standard simulation software.
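As a rough illustration of the stage-level idea (not the authors' ACM model), the following Python sketch applies a flow-based efficiency to a single LLX stage with a constant distribution coefficient, mutually immiscible carrier and solvent, and a solute-free solvent feed. All function and variable names are hypothetical.

```python
"""Minimal sketch of one LLX stage with a flow-based (modified Murphree)
efficiency. Illustrative only: constant distribution coefficient, a single
transferring solute, and immiscible carrier/solvent are assumed."""

def llx_stage(feed_solute, feed_carrier, solvent, K, efficiency):
    """Return (extract_solute, raffinate_solute) mole flows leaving the stage.

    feed_solute  : solute mole flow entering with the feed (raffinate phase)
    feed_carrier : carrier mole flow (assumed to stay in the raffinate)
    solvent      : solvent mole flow (assumed to stay in the extract)
    K            : distribution coefficient, y_solute / x_solute (mole fractions)
    efficiency   : flow-based efficiency, 0 (no transfer) .. 1 (equilibrium)
    """
    # Equilibrium solute transfer e_eq solves K * x = y with
    # x = r / (r + carrier), y = e / (e + solvent), e + r = feed_solute.
    # A simple bisection keeps the sketch dependency-free.
    lo, hi = 0.0, feed_solute
    for _ in range(60):
        e = 0.5 * (lo + hi)                 # solute in extract at current guess
        r = feed_solute - e                 # solute remaining in raffinate
        x = r / (r + feed_carrier)          # raffinate solute mole fraction
        y = e / (e + solvent)               # extract solute mole fraction
        if y < K * x:
            lo = e                          # not enough transferred yet
        else:
            hi = e
    e_eq = 0.5 * (lo + hi)

    # Flow-based efficiency: the actual transferred mole flow is a fraction
    # of the equilibrium transferred mole flow (solvent enters solute-free).
    e_out = efficiency * e_eq
    return e_out, feed_solute - e_out


if __name__ == "__main__":
    for eff in (0.0, 0.5, 1.0):
        ext, raf = llx_stage(feed_solute=10.0, feed_carrier=90.0,
                             solvent=100.0, K=2.0, efficiency=eff)
        print(f"E = {eff:.1f}: extract solute = {ext:.2f}, raffinate solute = {raf:.2f}")
```

At efficiency 0 the stage transfers nothing, and at efficiency 1 it reproduces the equilibrium split, mirroring the limiting behaviour described for the full model.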
To demonstrate the effectiveness of the modified efficiency model, we conducted simulations using an acetone-water system with 3-methylhexane as the solvent. The results showed that the modified efficiency definition predicts LLX process performance more realistically across various efficiency levels. At zero efficiency, the model correctly indicates no extraction or phase separation, while at full efficiency the system reaches equilibrium conditions. This validation confirms the model's ability to capture the full spectrum of operational efficiencies in LLX processes.
The novelty of this work lies in the modification of the original Murphree efficiency definition specifically for LLX processes. By substituting mole flows for mole fractions, we adapted the standard Murphree efficiency to better align with the mass transfer characteristics of LLX. This modification accounts for deviations from equilibrium due to operational inefficiencies, providing a more realistic representation of extraction stage performance. The modified efficiency model was tested by creating a custom multi-stage LLX column within ACM and integrating it into Aspen Plus. While our implementation uses Aspen Plus and ACM, the principles and methodology are applicable to other simulation environments, potentially broadening the impact of this approach within the chemical engineering community.
2:50pm - 3:10pm
Differentiation between Process and Equipment Drifts in Chemical Plants
Linda Eydam, Lukas Furtner, Julius Lorenz, Leon Urbas
TU Dresden, Germany
The performance of chemical plants depends on knowledge of both the underlying process and the deployed equipment. However, equipment drifts make it difficult to obtain accurate process information: measurement deviations caused by equipment malfunction may be misinterpreted as process drifts, and vice versa. Determining the cause of a drift and the proper course of action becomes even more difficult when equipment drifts occur in combination with process drifts. Additional information provided by the second channel [1] has the potential to enable recognition and decoupling of such coupled drifts. The second channel is an interface to the automation pyramid through which additional data can be read out and used without impacting the automation system [1].
In this work, a method is presented that uses this additional data to detect and decouple coupled drifts. The method combines statistical data analysis with quantitative model-based approaches. Statistical approaches divide the data into clusters [2], i.e. regions with different characteristics, such as a region exhibiting a certain drift or a region without any drift. After such a cluster analysis, however, statistical approaches alone cannot determine which cluster belongs to which drift. For this reason, possible drifts are modeled, theoretical data points are generated, and theoretical clusters are formed. Comparing these theoretical clusters with the real clusters allows the real clusters to be assigned to the drifts. The idea of decoupling is to reconstruct the drifts and span drift vectors that characterize them: data points representing coupled drifts become interpretable vectors, and drift decomposition disassembles these vectors into the individual drifts.
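A minimal sketch of the cluster-matching and decomposition idea is given below. The two-dimensional residual features, drift signatures, and numbers are invented for illustration and do not reproduce the plant models or second-channel data used in the paper.

```python
"""Illustrative sketch: assign anonymous data clusters to modeled drift
signatures, then decompose a coupled drift observation into individual drifts."""
import numpy as np
from sklearn.cluster import KMeans

# Synthetic measurement residuals in a 2-D feature space,
# e.g. (primary sensor residual, second-channel residual).
rng = np.random.default_rng(0)
no_drift    = rng.normal([0.0, 0.0], 0.05, size=(200, 2))
proc_drift  = rng.normal([0.8, 0.8], 0.05, size=(200, 2))  # process drift affects both channels
equip_drift = rng.normal([0.9, 0.0], 0.05, size=(200, 2))  # equipment drift affects primary only
data = np.vstack([no_drift, proc_drift, equip_drift])

# 1) Statistical step: cluster the real data (cluster labels are anonymous).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)

# 2) Model-based step: theoretical cluster centres generated from drift models.
theory = {"no drift": np.array([0.0, 0.0]),
          "process drift": np.array([1.0, 1.0]),
          "equipment drift": np.array([1.0, 0.0])}

# 3) Assign each real cluster to the nearest theoretical drift signature.
for k, centre in enumerate(km.cluster_centers_):
    label = min(theory, key=lambda name: np.linalg.norm(centre - theory[name]))
    print(f"cluster {k} at {centre.round(2)} -> {label}")

# 4) Decoupling: express a coupled observation as a combination of drift vectors.
basis = np.column_stack([theory["process drift"], theory["equipment drift"]])
coupled = np.array([1.5, 0.7])                    # hypothetical coupled drift point
weights, *_ = np.linalg.lstsq(basis, coupled, rcond=None)
print("process share %.2f, equipment share %.2f" % tuple(weights))
```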
The application of the developed method on a use case in a brownfield chemical plant showed that the method successfully detects and distinguishes simultaneously appearing process and equipment drifts.
References:
[1] J. de Caigny, T. Himmelsbach, and R. Huck, “NOA-Konzept, NE 175,” in NAMUR Open Architecture (NOA): Das Konzept zur Öffnung der Prozessautomatisierung, T. Tauchnitz, Ed., Essen: Vulkan Verlag, 2021, pp. 5–9.
[2] R. Dunia and S. Joe Qin, “Joint diagnosis of process and sensor faults using principal component analysis,” Control Engineering Practice, vol. 6, no. 4, Apr. 1998, doi: 10.1016/S0967-0661(98)00027-6.
3:10pm - 3:30pm
A superstructure approach for optimization of Simulated Moving Bed (SMB) chromatography
Eva Sorensen, Dian Ning Chia, Fanyi Duanmu
University College London, United Kingdom
High-performance liquid chromatography (HPLC) is one of the main separation methods in the pharmaceutical industry. HPLC can be operated in both batch and continuous mode, although the latter is only slowly emerging as a processing alternative, mainly due to the complexity of both its design and its operation. The most successful continuous HPLC process for drug manufacturing is the Simulated Moving Bed (SMB). SMB is a multi-column, continuous chromatographic process that can handle much higher throughputs than regular batch chromatographic processes. The process is initially transient but eventually arrives at a cyclic steady state, which makes optimization very challenging. An SMB unit usually has four sections (desorbent, extract, feed, and raffinate) and simulates counter-current flow between the stationary and mobile phases through periodic and synchronous switching of the inlet and outlet ports in the direction of fluid flow. Each section can have a different number of columns, which must be chosen carefully because of physical limitations (e.g. pressure drop) and its significant effect on separation performance. To the best of our knowledge, however, existing studies have either fixed the column configuration (number of columns per section) in advance or optimized each possible configuration individually, which results in a sub-optimal design and/or is very time-consuming. This work therefore proposes a superstructure approach that allows simultaneous optimization of the number of columns in each section as well as the column dimensions (length and diameter), the switching times, and the flow rates. A single superstructure optimization, typically a mixed-integer non-linear programming (MINLP) problem, is therefore sufficient to obtain not only the optimal configuration but also the entire design and the operating procedure.
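To illustrate the kind of decision structure involved (not the authors' model), the following Pyomo skeleton encodes the integer column allocation per section together with the continuous design and operating variables. The bounds, the total-column limit, and the placeholder objective are assumptions for illustration; the real formulation additionally embeds the discretized column equations, purity constraints, and cyclic-steady-state conditions.

```python
"""Skeleton of a superstructure MINLP for SMB column allocation (illustrative)."""
from pyomo.environ import (ConcreteModel, Set, Var, Integers, NonNegativeReals,
                           Constraint, Objective, minimize)

m = ConcreteModel()
m.sections = Set(initialize=["desorbent", "extract", "feed", "raffinate"])

# Integer number of columns per section (the superstructure decision).
m.n_col = Var(m.sections, domain=Integers, bounds=(1, 3))

# Continuous design and operating variables (placeholder bounds).
m.length = Var(domain=NonNegativeReals, bounds=(0.05, 1.0))     # column length, m
m.diameter = Var(domain=NonNegativeReals, bounds=(0.005, 0.1))  # column diameter, m
m.t_switch = Var(domain=NonNegativeReals, bounds=(60, 1800))    # switching time, s
m.flow = Var(m.sections, domain=NonNegativeReals, bounds=(1e-8, 1e-4))  # m3/s

# Limit the total number of columns that physically fit in the unit.
m.total_columns = Constraint(expr=sum(m.n_col[s] for s in m.sections) <= 8)

# Placeholder objective: in the full model this would be, e.g., desorbent
# consumption or cost at cyclic steady state, subject to purity constraints.
m.obj = Objective(expr=sum(m.flow[s] for s in m.sections) * m.t_switch,
                  sense=minimize)

# Solving requires an MINLP solver, e.g.:
#   from pyomo.environ import SolverFactory
#   SolverFactory("bonmin").solve(m)
```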
This work focuses not only on the optimal design of SMB using a superstructure approach, but also on the steps required and the challenges faced when constructing such a superstructure model, taking into account the transient startup and the final cyclic steady state. Depending on the purpose of the study, SMB processes can be modelled either with partial discretization (i.e. only temporal domain discretized) or full discretization (i.e. temporal and spatial domains discretized); with the former being used for studying dynamic behaviour such as start-up conditions while the latter is required for optimization purposes. This work first validates both the partially and fully discretized superstructure models against experimental results reported in the literature. Then, superstructure optimization based on full discretization is considered and compared with individual optimizations of the possible structures for a given case study. The results show that the superstructure optimization proposed in this work can converge to the best column structure with significantly lower computation time.
3:30pm - 3:50pm
Design Space Exploration via Gaussian Process Regression and Alpha Shape Visualization
Elizaveta Marich, Andrea Galeazzi, Foteini Michalopoulou, Steven Sachio, Maria M. Papathanasiou
Sargent Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College London, United Kingdom
Efficient identification of the design space (DSp) is crucial for optimizing chemical process development, ensuring adherence to industry standards for product quality, safety, and performance. However, traditional methods often struggle with the computational challenges posed by multi-dimensional, non-convex problems [1,2,3]. In response, we propose a novel approach that combines Gaussian Process Regression (GPR) with alpha shape reconstruction to efficiently evaluate and visualize design spaces across varying dimensionalities.
Our methodology focuses on reducing the computational complexity of knowledge space generation by employing GPR surrogate models enhanced through an integrated kernel optimization step. Using a greedy tree search algorithm to identify the optimal composite kernel [4], the approach significantly improves the model's ability to capture intricate, non-linear relationships within the design space.
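As an illustration of such a kernel search (a generic sketch, not the authors' implementation), the following Python snippet greedily grows a composite scikit-learn kernel by sums and products of base kernels, keeping the candidate with the highest log marginal likelihood.

```python
"""Greedy composite-kernel search for a GPR surrogate (illustrative sketch)."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def score(kernel, X, y):
    """Fit a GPR with the given kernel and return its log marginal likelihood."""
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    return gp.log_marginal_likelihood_value_

def greedy_kernel_search(X, y, n_rounds=2):
    bases = [RBF(), Matern(nu=2.5), RationalQuadratic()]
    # Start from the best single base kernel.
    best_score, best = max(((score(k, X, y), k) for k in bases),
                           key=lambda t: t[0])
    for _ in range(n_rounds):
        # Expand the current kernel by sums and products with each base kernel.
        candidates = [best + b for b in bases] + [best * b for b in bases]
        top_score, top = max(((score(k, X, y), k) for k in candidates),
                             key=lambda t: t[0])
        if top_score <= best_score:
            break  # expanding the kernel no longer improves the fit
        best, best_score = top, top_score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(60, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.normal(size=60)
    print("selected kernel:", greedy_kernel_search(X, y))
```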
To define the boundaries of the feasible region without assuming convexity, we utilize alpha shape reconstruction. This technique extends the concept of convex hulls to handle non-convex and disjoint shapes, providing an accurate representation of complex design spaces [3]. The alpha shape reconstruction is implemented using the 'dside' Python package developed by Sachio et al. [5], which integrates Delaunay Triangulations and a bisection search to determine the largest alpha radius, effectively reconstructing the design space.
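The core geometric test behind an alpha shape can be sketched in a few lines: keep only the Delaunay triangles whose circumradius does not exceed a chosen alpha radius. The snippet below is a generic two-dimensional illustration; it does not reproduce the 'dside' interface or its bisection search for the largest alpha radius.

```python
"""Minimal 2-D alpha-shape sketch: filter Delaunay triangles by circumradius."""
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha_radius):
    """Return the Delaunay simplices whose circumradius is at most alpha_radius."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # Side lengths and circumradius R = (la * lb * lc) / (4 * area).
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area < 1e-12:
            continue  # skip degenerate (collinear) triangles
        circumradius = la * lb * lc / (4.0 * area)
        if circumradius <= alpha_radius:
            kept.append(simplex)
    return np.array(kept)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Feasible samples forming a non-convex (ring-shaped) region.
    theta = rng.uniform(0, 2 * np.pi, 400)
    r = rng.uniform(0.6, 1.0, 400)
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    simplices = alpha_shape_triangles(pts, alpha_radius=0.25)
    print(f"kept {len(simplices)} of {len(Delaunay(pts).simplices)} triangles")
```

Triangles spanning the hole of the ring have large circumradii and are discarded, so the retained triangles trace the non-convex feasible region rather than its convex hull.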
We assess the effectiveness of the proposed methodology through case studies involving constrained non-convex functions and engineering design problems across two- to seven-dimensional spaces. The results demonstrate that our approach can accurately reconstruct complex design spaces while requiring significantly fewer computational resources compared to existing surrogate-based methods.
References:
[1] Grossmann, I. E., Halemane, K. P., & Swaney, R. E. (1983). Optimization strategies for flexible chemical processes. Computers and Chemical Engineering, 7, 439–462.
[2] Ierapetritou, M. G., & Pistikopoulos, E. N. (2018). Optimization approaches for design and planning of flexible chemical plants. In Process Systems Engineering: Volume 1: Process Modeling, Simulation and Control (pp. 147–184). Wiley.
[3] Geremia, M., Bezzo, F., & Ierapetritou, M. G. (2023). A novel framework for the identification of complex feasible space. Computers & Chemical Engineering, 179, 108427.
[4] Duvenaud, D., Lloyd, J. R., Grosse, R., Tenenbaum, J. B., & Ghahramani, Z. (2013). Structure discovery in nonparametric regression through compositional kernel search. In Proceedings of the 30th International Conference on Machine Learning (pp. 1166–1174).
[5] Sachio, S., Kontoravdi, C., & Papathanasiou, M. M. (2023). A model-based approach towards accelerated process development: A case study on chromatography. Chemical Engineering Research and Design, 197, 800–820.
3:50pm - 4:10pm
Langmuir.jl: An efficient and composable Julia package for adsorption thermodynamics
Vinicius Viena Santana (1), Andrés Riedemann (3), Pierre Walker (2), Idelfonso Nogueira (1)
(1) Norwegian University of Science and Technology; (2) California Institute of Technology; (3) Universidad de Concepción
Recent advancements in material design have made adsorption a more energy-efficient alternative to traditional thermally driven separation processes. Accurate modelling of adsorption thermodynamics is crucial for designing and operating equilibrium-limited adsorption systems. While high-quality open-source packages like PyIAST [1], pyGAPS [2], and RUPTURA [3] are available for processing adsorption data, they operate in isolated ecosystems with limited integration with other computational tools. For example, calculating the isosteric heat of adsorption for single or multi-component systems requires derivatives, which can be error-prone, time-consuming and challenging to maintain for new isotherms if done manually. Automatic differentiation (AD) frameworks are a potential solution to this problem, but in most AD engines, many package elements must be rewritten to accommodate specific AD object types.
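For context, the derivative in question is the temperature derivative of the equilibrium pressure at constant loading, which enters the standard Clausius-Clapeyron expression for the isosteric heat (assuming an ideal gas phase); automatic differentiation can evaluate it directly from an isotherm model instead of a hand-derived expression for each isotherm.

```latex
% Isosteric heat of adsorption at constant loading n (Clausius-Clapeyron form):
\[
q_{\mathrm{st}} \;=\; R T^{2} \left(\frac{\partial \ln P}{\partial T}\right)_{n}
\;=\; -R \left(\frac{\partial \ln P}{\partial (1/T)}\right)_{n}
\]
```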
Langmuir.jl addresses these limitations by leveraging Julia's composable and differentiable programming ecosystem. Langmuir.jl provides tools for processing adsorption thermodynamics data, including data loading, isotherm fitting with the most commonly used models, and prediction of multicomponent adsorption through Ideal Adsorbed Solution Theory (IAST). Importantly, it enables accurate derivative calculations through Julia's automatic differentiation libraries without requiring extensive code adjustments. Additionally, it integrates seamlessly with Clapeyron.jl [4] for rigorous fluid-phase behaviour modelling, an aspect that most implementations neglect and that is increasingly important for high-pressure gas storage, e.g., of hydrogen.
[1] Simon, C. M., Smit, B., & Haranczyk, M. (2016). PyIAST: Ideal adsorbed solution theory (IAST) Python package. Computer Physics Communications, 200, 364-380. https://doi.org/10.1016/j.cpc.2015.11.016
[2] Iacomi, P., & Llewellyn, P. L. (2019). pyGAPS: A Python-based framework for adsorption isotherm processing and material characterisation. Adsorption, 25(8), 1533-1542. https://doi.org/10.1007/s10450-019-00168-5
[3] Sharma, S., Balestra, S. R. G., Baur, R., Agarwal, U., Zuidema, E., Rigutto, M. S., Calero, S., Vlugt, T. J. H., & Dubbeldam, D. (2023). RUPTURA: Simulation code for breakthrough, ideal adsorption solution theory computations, and fitting of isotherm models. Molecular Simulation, 49(9), 893-953. https://doi.org/10.1080/08927022.2023.2202757
[4] Walker, P. J., Yew, H.-W., & Riedemann, A. (2022). Clapeyron.jl: An extensible, open-source fluid thermodynamics toolkit. Industrial & Engineering Chemistry Research, 61(20), 7130-7153. https://doi.org/10.1021/acs.iecr.2c00326