Search results for: manufacturing optimization
3193 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load-Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization-trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller tracks the load power effectively and smoothly compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
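A minimal sketch of the PSO-based PID tuning loop described above, assuming a generic first-order stand-in plant and an integral-absolute-error (IAE) cost; the plant model, gain bounds, and swarm settings are illustrative, not the authors' PWR core transfer function:

```python
import numpy as np

def iae_cost(gains, t_end=50.0, dt=0.01, setpoint=1.0):
    """IAE of a PID loop around a toy first-order plant.
    G(s) = 1/(10s + 1) is a stand-in, not the PWR core model."""
    kp, ki, kd = gains
    y, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / 10.0          # first-order plant dynamics
        prev_err = err
        iae += abs(err) * dt
    return iae

def pso_tune(n_particles=20, iters=50, bounds=(0.0, 20.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, 3))   # positions = (Kp, Ki, Kd)
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([iae_cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([iae_cost(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

if __name__ == "__main__":
    gains, cost = pso_tune()
    print("Tuned (Kp, Ki, Kd):", gains, "IAE:", cost)
```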
Procedia PDF Downloads 157
3192 Reducing the Frequency of Flooding Accompanied by Low pH Wastewater in 100/200 Unit of Phosphate Fertilizer 1 Plant by Implementing the 3R Program (Reduce, Reuse and Recycle)
Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha
Abstract:
In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (run of pile) production rate at the Phosphate Fertilizer 1 plant, causing an increase in scrubbing water consumption in the 100/200 area unit. This increase in water consumption produces a higher discharge of wastewater, which can in turn cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is often a passing area for trucks transporting raw materials, which causes the wastewater to become acidic (at the worst point, down to pH 1). PT Petrokimia Gresik then resolved the flooding and acidic wastewater exposure in this area through a set of wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The program consists of a scrubbing water consumption reduction step based on the liquid/gas ratio in the scrubbing unit of the 100/200 Phosphate Fertilizer 1 plant; a wastewater interconnection line, so that wastewater from unit 100/200 can be reused as scrubbing water in the Phonska 1, Phonska 2 and Phonska 3 plants and in unit 300 of the Phosphate Fertilizer 1 plant; and an increase in scrubbing effectiveness guided by scrubbing effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has succeeded in reducing NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reducing process water consumption by up to 600 m3/day, equivalent to savings of up to 63,739.62 dollars/year.
Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management
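As a quick consistency check on the figures above: the annual savings follow from the stated daily reductions once a unit price is assumed. The unit prices below are back-calculated from the abstract's own numbers and are assumptions, not quoted prices:

```python
# Back-check of the reported savings: daily reduction x 365 days x unit price.
naoh_kg_per_day = 2880
water_m3_per_day = 600

# Unit prices inferred so that the stated annual savings are reproduced.
naoh_price_usd_per_kg = 314_359.76 / (naoh_kg_per_day * 365)    # ~0.299 $/kg
water_price_usd_per_m3 = 63_739.62 / (water_m3_per_day * 365)   # ~0.291 $/m3

print(f"NaOH savings:  {naoh_kg_per_day * 365 * naoh_price_usd_per_kg:,.2f} $/yr")
print(f"Water savings: {water_m3_per_day * 365 * water_price_usd_per_m3:,.2f} $/yr")
```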
Procedia PDF Downloads 29
3191 Simulation and Controller Tuning in a Photo-Bioreactor by Applying the Taguchi Method
Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi
Abstract:
This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller parameters by the Taguchi method, using MATLAB and the Qualitek-4 software. Because biological processes involve, in addition to parameters such as temperature, dissolved carbon dioxide, and biomass, physical parameters such as light intensity and physiological conditions such as photosynthetic efficiency and light inhibition, their control faces many challenges. Photo-bioreactors are efficient systems not only for the commercial production of microalgae as feed for aquaculture and as food supplements, but also as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light in the bioreactor, and the microalgae growth rate and carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters, obtained from the S/N and ANOVA analyses in Qualitek-4, were compared with those from the reaction curve (Cohen-Coon) and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of these control methods, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor; compared to the other methods, it showed higher stability and a shorter settling time.
Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method
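A minimal sketch of the smaller-the-better signal-to-noise analysis underlying the Taguchi selection step, using a standard L9 orthogonal array and made-up sum-of-squared-error results in place of the Qualitek-4 data:

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Illustrative controller SSE measured for each run (placeholder values).
sse = np.array([4.1, 3.2, 5.0, 2.8, 3.9, 4.4, 3.1, 2.6, 3.7])

# Smaller-the-better S/N ratio: SN = -10 log10(mean(y^2)); one y per run here.
sn = -10.0 * np.log10(sse ** 2)

# Average S/N per factor level; the best level maximizes S/N.
for f in range(L9.shape[1]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in (0, 1, 2)]
    print(f"factor {f}: level S/N means = {np.round(means, 2)}, "
          f"best level = {int(np.argmax(means))}")
```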
Procedia PDF Downloads 395
3190 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission
Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder
Abstract:
Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to their physical properties in combustion systems and emission. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure and organization. Recently, our research group reported that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, they may provide information on the micro/nano-particles that are formed in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may thereby be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.
Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR
Procedia PDF Downloads 341
3189 Aluminum Matrix Composites Reinforced by Glassy Carbon-Titanium Spatial Structure
Authors: B. Hekner, J. Myalski, P. Wrzesniowski
Abstract:
This study presents aluminum matrix composites reinforced by glassy carbon (GC) and titanium (Ti). In the first step, the heterophase (GC+Ti) spatial form of the reinforcement (similar to a skeleton) was obtained via our own method: polyurethane foam (with a spatial, open-cell structure) covered by a suspension of Ti particles in phenolic resin was pyrolyzed. In the second step, the prepared heterogeneous foams were infiltrated with an aluminium alloy. The manufactured composites are intended for industrial application, especially as materials used in the tribological field. From this point of view, the glassy carbon was applied to stabilise the coefficient of friction at the required value of 0.6 and to reduce wear. Furthermore, wear can be limited by the titanium phase, which exhibits high mechanical properties. Moreover, fabricating a thin titanium layer on the carbon skeleton reduces contact between the aluminium alloy and the carbon, and thus the formation of an aluminium carbide phase. The main novelty, however, is the manufacturing of the reinforcement in the form of a 3D skeleton foam. Compared to the classical form of reinforcement (particles), this kind of reinforcement offers several important advantages: the homogeneity of the reinforcement phase in the composite can be controlled; the composite can be manufactured by a simple infiltration technique; the reinforcement can be applied only in the required places of the material; the phase composition can be strictly controlled; and high-quality bonding between the components is obtained. This research is funded by NCN under grant UMO-2016/23/N/ST8/00994.
Keywords: metal matrix composites, MMC, glassy carbon, heterophase composites, tribological application
Procedia PDF Downloads 118
3188 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, ignoring the risk of an unbalanced grid – a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus for the expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10% – important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output – rather than extrapolating historic data – paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid – and assigning it different thresholds – reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
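A minimal sketch of how a CVaR constraint can be made tractable in a scenario-based capacity portfolio, using the standard Rockafellar-Uryasev linearization; the technology costs, capacity factors, and demand scenarios are made up, and the model is far simpler than the paper's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
S, techs = 200, ["wind", "solar", "storage", "biomass"]
cost = np.array([1.2, 0.9, 1.5, 2.0])            # $/unit capacity (assumed)
cf = rng.uniform(0.05, 0.9, (S, 4))              # per-scenario capacity factors
demand = rng.uniform(80, 120, S)                 # per-scenario demand
alpha, cvar_cap = 0.95, 5.0                      # CVaR level and shortfall cap

# Variables: x (4 capacities), eta (VaR auxiliary), u_s (S shortfall excesses).
n = 4 + 1 + S
c = np.zeros(n); c[:4] = cost                    # minimize total capacity cost

A_ub, b_ub = [], []
# u_s >= shortfall_s - eta  <=>  -cf_s.x - eta - u_s <= -demand_s
for s in range(S):
    row = np.zeros(n)
    row[:4] = -cf[s]; row[4] = -1.0; row[5 + s] = -1.0
    A_ub.append(row); b_ub.append(-demand[s])
# CVaR = eta + mean(u)/(1 - alpha) <= cvar_cap
row = np.zeros(n)
row[4] = 1.0; row[5:] = 1.0 / ((1 - alpha) * S)
A_ub.append(row); b_ub.append(cvar_cap)

bounds = [(0, None)] * 4 + [(None, None)] + [(0, None)] * S
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
print("capacities:", dict(zip(techs, np.round(res.x[:4], 1))),
      "cost:", round(res.fun, 1))
```

Sweeping `cvar_cap` (or `alpha`) over 1%, 5% and 10% thresholds, as in the paper, traces how the optimal technology mix shifts as the tolerated grid-imbalance risk changes.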
Procedia PDF Downloads 171
3187 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone
Authors: Wenjuan Lu
Abstract:
China and Hungary have risen from a friendly, comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and economic and trade relations between the two countries have developed smoothly. As an important country along the ‘Belt and Road’, Hungary has strong economic complementarities with China and unique advantages in receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated by the ‘Sino-Hungarian Borsod Industrial Zone’ and the ‘Hungarian Central European Trade and Logistics Cooperation Park’, has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. It has enhanced the influence of Chinese companies in the European market and promoted economic development in Hungary and even in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are shortcomings such as small scale, single function, and the lack of a prominent platform. In the future, based on the needs of China's ‘17+1’ cooperation and China-Hungary cooperation, and on the basis of appropriately expanding the scale and number of economic and trade cooperation zones, the focus should be on optimizing and adjusting their functions and highlighting differentiated economic and trade cooperation. Differentiating the functions of the trade zones strengthens the multi-faceted cooperation of the economic and trade cooperation zones and highlights their role as a platform for cooperation in information, capital, and services.
Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe
Procedia PDF Downloads 180
3186 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers’ hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
Keywords: optimization, metaprogramming, logic programming, abstraction
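Peridot's own syntax is not given in the abstract, so the sketch below only illustrates the underlying idea – an optimization written as a pattern-matching rewrite rule driven by unification – in Python; the map-fusion rule and the tuple term encoding are hypothetical:

```python
# Terms are tuples like ("map", f, xs); pattern variables are strings like "?f".
def unify(pattern, term, subst):
    """Match a pattern against a ground term, extending the substitution."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(template, subst):
    if isinstance(template, str) and template.startswith("?"):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

# An "optimization as metaprogram": map f (map g xs) -> map (compose f g) xs.
LHS = ("map", "?f", ("map", "?g", "?xs"))
RHS = ("map", ("compose", "?f", "?g"), "?xs")

def rewrite(term):
    """Apply the fusion rule bottom-up wherever it unifies."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t) for t in term)
    s = unify(LHS, term, {})
    return substitute(RHS, s) if s is not None else term

prog = ("map", "inc", ("map", "double", "xs"))
print(rewrite(prog))   # ('map', ('compose', 'inc', 'double'), 'xs')
```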
Procedia PDF Downloads 88
3185 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods
Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar
Abstract:
Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% from an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration and inoculum size. The optimized medium increased removal efficiency to 97% from an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the time interval studied. Metal removal efficiency was also unaffected by changes in flow rate. Overall, these results demonstrate the potential for lead bioremediation by free or immobilized cells of Halomonas sp. ES015, both in batch cultures and in semicontinuous cultures using column technology.
Keywords: bioremediation, lead, Box-Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman
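A minimal sketch of the response-surface step behind a three-factor Box-Behnken design like the one above: fit a full quadratic model in yeast extract, casamino acid and inoculum size (coded units), then locate its stationary point. The design matrix is the standard layout; the removal-efficiency responses are placeholders, not the study's data:

```python
import numpy as np
from itertools import combinations

# Three-factor Box-Behnken design in coded units (+/-1), plus 3 center runs.
edges = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            pt = [0.0, 0.0, 0.0]
            pt[i], pt[j] = a, b
            edges.append(pt)
X = np.array(edges + [[0.0, 0.0, 0.0]] * 3)      # 15 runs

# Placeholder lead-removal responses (%) for each run, not the study's data.
y = np.array([88, 90, 91, 95, 85, 89, 92, 96, 84, 93, 90, 97, 96, 97, 96.5])

def quad_features(X):
    ones = np.ones((len(X), 1))
    cross = np.column_stack([X[:, i] * X[:, j]
                             for i, j in combinations(range(3), 2)])
    return np.hstack([ones, X, X ** 2, cross])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
lin, quad, cross = beta[1:4], beta[4:7], beta[7:]

# Stationary point: solve grad = lin + 2*B*x = 0, B = half the Hessian terms.
B = np.diag(quad)
for k, (i, j) in enumerate(combinations(range(3), 2)):
    B[i, j] = B[j, i] = cross[k] / 2.0
x_star = np.linalg.solve(2 * B, -lin)
print("stationary point (coded units):", np.round(x_star, 2))
```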
Procedia PDF Downloads 198
3184 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0
Authors: Harris Niavis, Dimitra Politaki
Abstract:
Thanks to advances in the ICT sector, the manufacturing industry has been collecting vast amounts of data for monitoring product quality, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion in the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values, such as outliers and extreme deviations, in the data coming from the sensors. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoder models, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
Keywords: blockchain, data quality, Industry 4.0, product quality
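A minimal sketch of the ingestion pattern described above – screen each sensor batch for anomalies, keep raw data off-chain, and store only an anonymized pointer (a hash) on the ledger. A robust z-score detector stands in for the paper's ARIMA/LSTM/autoencoder/GAN ensemble, and a plain Python list stands in for the Hyperledger Fabric ledger:

```python
import hashlib
import json
import numpy as np

def detect_anomalies(batch, z_thresh=3.5):
    """Flag point anomalies with a robust (median/MAD) z-score; a simple
    stand-in for the paper's ARIMA / LSTM / autoencoder / GAN ensemble."""
    x = np.asarray(batch, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-9
    return 0.6745 * np.abs(x - med) / mad > z_thresh

off_chain_store = {}   # sensitive raw data stays off-chain
ledger = []            # stand-in for the Hyperledger Fabric ledger

def ingest(sensor_id, batch):
    flags = detect_anomalies(batch)
    clean = [v for v, bad in zip(batch, flags) if not bad]
    payload = json.dumps({"sensor": sensor_id, "values": clean}).encode()
    pointer = hashlib.sha256(payload).hexdigest()   # anonymized pointer
    off_chain_store[pointer] = payload
    # Only the pointer and summary metadata are written "on-chain".
    ledger.append({"ptr": pointer, "n": len(clean),
                   "rejected": int(flags.sum())})
    return pointer

ptr = ingest("line-7-temp", [21.1, 21.3, 21.2, 98.6, 21.4, 21.2, 21.3])
print(ledger[-1])   # one clearly bad reading is rejected before ingestion
```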
Procedia PDF Downloads 191
3183 Heuristic Algorithms for the Time-Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computational time, WTA has suffered from limited solution efficiency; as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these yield lower objective values than the decomposed opt-opt algorithm but require very little computation time. Hence, this paper proposes an improved method that applies decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
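A minimal sketch of the greedy principle used in the faster heuristics above: repeatedly assign the weapon-target pair with the largest marginal reduction in expected surviving target value. Kill probabilities and target values are illustrative, and the time/scheduling dimension of TWTA is omitted:

```python
import numpy as np

def greedy_wta(p_kill, value, shots_per_weapon=1):
    """p_kill[i, j]: probability weapon i destroys target j.
    value[j]: target value. Expected surviving value of target j is
    value[j] * prod(1 - p_kill[i, j]) over weapons i assigned to it."""
    n_w, _ = p_kill.shape
    shots = np.full(n_w, shots_per_weapon)
    survive = value.astype(float).copy()   # current expected surviving value
    assignment = []
    while shots.sum() > 0:
        gain = p_kill * survive            # marginal gain of each (i, j) shot
        gain[shots == 0, :] = -1.0         # weapons with no shots left
        i, j = np.unravel_index(np.argmax(gain), gain.shape)
        if gain[i, j] <= 0:
            break
        assignment.append((int(i), int(j)))
        survive[j] *= 1.0 - p_kill[i, j]
        shots[i] -= 1
    return assignment, survive.sum()

rng = np.random.default_rng(7)
p = rng.uniform(0.3, 0.9, (4, 3))          # 4 launchers, 3 targets (assumed)
v = np.array([10.0, 6.0, 8.0])
plan, remaining = greedy_wta(p, v)
print("assignments (weapon, target):", plan,
      "expected surviving value:", round(remaining, 2))
```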
Procedia PDF Downloads 337
3182 Stability Optimization of NaBH₄ via pH and H₂O:NaBH₄ Ratios for Large Scale Hydrogen Production
Authors: Parth Mehta, Vedasri Bai Khavala, Prabhu Rajagopal, Tiju Thomas
Abstract:
There is an increasing need for alternative clean fuels, and hydrogen (H₂) has long been considered a promising solution with a high calorific value (142 MJ/kg). However, the storage of H₂ and the expensive processes for its generation have hindered its usage. Sodium borohydride (NaBH₄) can potentially be used as an economically viable means of H₂ storage. Thus far, there have been attempts to optimize the life of NaBH₄ (half-life) in aqueous media by stabilizing it with sodium hydroxide (NaOH) at various pH values. Other reports have shown that H₂ yield and reaction kinetics remain constant for all ratios of H₂O to NaBH₄ above 30:1, without any acidic catalysts. Here we highlight the importance of pH and the H₂O:NaBH₄ ratio (80:1, 40:1, 20:1 and 10:1 by weight) for NaBH₄ stabilization (half-life reaction time at room temperature) and for minimizing corrosion of H₂ reactor components. It is interesting to observe that at any particular pH>10 (e.g., pH = 10, 11 and 12), stability does not have the expected linear dependence on the H₂O:NaBH₄ ratio. On the contrary, high stability was observed at the 10:1 H₂O:NaBH₄ ratio across all pH>10. When the H₂O:NaBH₄ ratio is increased from 10:1 to 20:1 and beyond (up to 80:1), constant stability (% degradation) is observed with respect to time. For practical usage (consumption within 6 hours of making the NaBH₄ solution), 15% degradation at pH 11 and an H₂O:NaBH₄ ratio of 10:1 is recommended. Increasing this ratio demands a higher NaOH concentration at the same pH, thus requiring a higher concentration or volume of acid (e.g., HCl) for H₂ generation. The reactions are done with tap water to render the results useful from an industrial standpoint. The observed stability regimes are rationalized based on the complexes associated with NaBH₄ when solvated in water, which depend sensitively on both pH and the H₂O:NaBH₄ ratio.
Keywords: hydrogen, sodium borohydride, stability optimization, H₂O:NaBH₄ ratio
Procedia PDF Downloads 124
3181 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection
Authors: Rubin Dan, Xingcai Wang, Ziyang Chen
Abstract:
A chaotic time series noise reduction method based on the fusion of the local projection method, the wavelet transform, and the particle swarm algorithm (referred to as the LW-PSO method) is proposed to address the problem of false recognition caused by noise when identifying chaotic time series. The method first uses phase space reconstruction to recover the characteristics of the original dynamical system and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components, maximizing the retention of signal information, while least-squares optimization is performed by the particle swarm algorithm. The method is verified on a simulated Lorenz system containing 30% Gaussian white noise: the phase space, SNR value, RMSE value, and K value of the 0-1 test before and after noise reduction are compared and analyzed for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, which demonstrates that the LW-PSO method achieves a better noise reduction effect than the other three common methods. The method is also applied to a classical benchmark system to evaluate the noise reduction effect of the four methods and the identification of the original system, further verifying the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence, and the results show that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising
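A minimal sketch of the wavelet step described above – decompose the noisy series, zero the three finest detail bands (D1-D3), and reconstruct – using PyWavelets; the Daubechies-4 mother wavelet and the test signal are assumptions, and the phase-space and PSO stages are not shown:

```python
import numpy as np
import pywt

def remove_d1_d3(signal, wavelet="db4"):
    """Zero the three finest detail bands of a wavelet decomposition.
    wavedec returns [cA_n, cD_n, ..., cD2, cD1]; D1-D3 are the last three."""
    coeffs = pywt.wavedec(signal, wavelet, level=4)
    for k in (-1, -2, -3):                 # D1, D2, D3
        coeffs[k] = np.zeros_like(coeffs[k])
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Noisy test signal: slow oscillations plus Gaussian white noise.
t = np.linspace(0, 20, 2048)
clean = np.sin(t) + 0.5 * np.sin(2.3 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)

denoised = remove_d1_d3(noisy)
rmse = np.sqrt(np.mean((denoised - clean) ** 2))
print("RMSE after D1-D3 removal:", round(float(rmse), 4))
```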
Procedia PDF Downloads 200
3180 Pragmatism in Adaptive Reuse of Obsolete Industrial Land in China
Authors: Yong Li
Abstract:
Major cities in China have experienced a shift from an economy based on manufacturing to one based on tertiary industry. How to make better use of existing obsolete industrial land within urban cores has become a difficult problem for many policymakers. City governments regard old manufacturing industrial land as an important source of land to facilitate the development of their cities. Despite the announcement of policies promoting this, a large portion of industrial land has still not been properly redeveloped, and most of it has become obsolete. This study uses the Xinyi International Club project as a case to examine the process of adaptive reuse of obsolete industrial space in Guangzhou, China. It attempts to elucidate the underlying mechanisms by identifying the key forces from both the government and the private sector that influence the process. The study found that market forces in transforming industrial space are exerting a strong impact on the existing land use planning system in Chinese cities. Pragmatic relaxation of the formal land-use regulatory framework and supportive government land-use intervention have also been crucial to achieving successful implementation of the restructuring project and making it a showcase. This study questions whether these extraordinary measures, in particular the use of temporary land use permits, are sustainable in facilitating the transformation of derelict industrial land and in informing future industrial land-use restructuring policies. It concludes that, while the land use regulatory system in China is becoming increasingly dynamic and flexible, it remains ill-equipped to respond positively to the market, which is characterized by the increasing bargaining power of the private sector. A comprehensive appraisal of the overall impacts of such adaptive reuse on society is still wanting.
Keywords: China, land alteration, obsolete industrial properties, urban planning
Procedia PDF Downloads 148
3179 The Use of Industrial Ecology Principles in the Production of Solar Cells and Solar Modules
Authors: Julius Denafas, Irina Kliopova, Gintaras Denafas
Abstract:
Three opportunities for the implementation of industrial ecology principles in real industrial production of c-Si solar cells and modules are presented in this study: material flow dematerialisation, product modification and industrial symbiosis. Firstly, it is shown how collaboration between R&D institutes and industry helps to achieve a significant reduction of material consumption by a) abandoning the phosphosilicate glass cleaning process and b) shortening the SiNx coating production step. This work was performed in the frame of the Eco-Solar project, in which Soli Tek R&D collaborates with partners from the ISC-Konstanz institute. Secondly, it is shown how modifying the solar module design can reduce the CO2 footprint of the product and enhance waste prevention. This was achieved by implementing a frameless glass/glass solar module design instead of a glass/backsheet design with an aluminium frame. Such a design change is possible without purchasing new equipment and without loss of the main product properties, such as efficiency, rigidity and longevity. Thirdly, industrial symbiosis in solar cell production is possible when manufacturing waste (silicon wafer and solar cell breakage) is collected, sorted and supplied as raw material to other companies involved in the production chain of c-Si solar cells. The obtained results showed that solar cells produced from recycled silicon can have electrical parameters comparable to those produced from standard, commercial silicon wafers. The above-mentioned work was performed at the solar cell producer Soli Tek R&D in the frame of the H2020 projects CABRISS and Eco-Solar.
Keywords: solar cells and solar modules, manufacturing, waste prevention, recycling
Procedia PDF Downloads 214
3178 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing
Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev
Abstract:
The creation of a new generation of micro- and nanoelectronic elements opens up unlimited possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, a number of conditions on the basic parameters of electronic memory must be fulfilled, such as non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are presented in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) is especially distinguished due to its multi-bit property, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the effect of resistive switching – a change in the resistance of the oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One of the methods for the technical implementation of neuromorphic systems is cross-bar structures, which are ReRAM cells interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials makes it possible to highlight the advantages and hide the disadvantages of each material. Therefore, the ZnₓTiᵧHfzOᵢ nanocomposite was chosen as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not need electroforming, a step that degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During the measurements, the TiN film was grounded. Analysis of the obtained current-voltage characteristics showed resistive switching from the HRS to the LRS at +1.87±0.12 V, and from the LRS to the HRS at -2.71±0.28 V. An endurance test showed that the HRS was 283.21±32.12 kΩ and the LRS was 1.32±0.21 kΩ over 100 measurements. The HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free nanocomposite ZnₓTiᵧHfzOᵢ films in the manufacturing of neuromorphic systems. This work was supported by RFBR, according to research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.
Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect
Procedia PDF Downloads 132
3177 Multi-Objective Optimization for Aircraft Fleet Management: A Parametric Approach
Authors: Xin-Yu Li, Dung-Ying Lin
Abstract:
Fleet availability is a crucial indicator for an aircraft fleet. However, in practice, fleet planning involves many resource and safety constraints, such as annual and monthly flight training targets and maximum engine usage limits. For safety reasons, engines must be removed for mandatory maintenance and replacement of key components; this situation is known as a "threshold." The annual number of thresholds is a key factor in maintaining fleet availability. The traditional planning method, however, relies heavily on experience and manual planning, which may result in ineffective engine usage and affect flight missions. This study aims to address the challenges of fleet planning and availability maintenance in aircraft fleets with resource and safety constraints, with the goal of effectively optimizing engine usage and maintenance tasks. The study considers four objectives: minimizing the number of engine thresholds, minimizing the monthly shortfall in flight hours, minimizing the monthly excess of flight hours, and minimizing engine disassembly frequency. To solve the resulting formulation, this study uses parametric programming techniques and the ε-constraint method to reformulate the multi-objective problem into single-objective problems, efficiently generating Pareto fronts. This approach is advantageous when handling multiple conflicting objectives, as it allows for an effective trade-off between the competing objectives. Empirical results and managerial insights will be provided.
Keywords: aircraft fleet, engine utilization planning, multi-objective optimization, parametric method, Pareto optimality
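A minimal sketch of the ε-constraint reformulation used above: keep one objective, move another into a constraint f2(x) ≤ ε, and sweep ε to trace the Pareto front. The two-objective toy problem is a stand-in for the fleet model's four objectives:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives standing in for, e.g., "engine thresholds" vs "flight-hour gap".
f1 = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 3) ** 2

def eps_constraint(eps):
    """Minimize f1 subject to f2(x) <= eps."""
    cons = {"type": "ineq", "fun": lambda x: eps - f2(x)}
    res = minimize(f1, x0=[0.0, 0.0], constraints=[cons], method="SLSQP")
    return res.x, f1(res.x), f2(res.x)

# Sweep epsilon to trace the Pareto front between the two objectives.
for eps in np.linspace(1.0, 9.0, 5):
    x, v1, v2 = eps_constraint(eps)
    print(f"eps={eps:4.1f}  f1={v1:6.3f}  f2={v2:6.3f}  x={np.round(x, 2)}")
```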
Procedia PDF Downloads 29
3176 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 69
3175 Implementation of Social Network Analysis to Analyze the Dependency between Construction Bid Packages
Authors: Kawalpreet Kaur, Panagiotis Mitropoulos
Abstract:
The division of the project scope into work packages is the most important step in the preconstruction phase of construction projects. The work division determines the scope and complexity of each bid package, resulting in dependencies between the project participants performing these work packages. Coordination between project participants is necessary because of these dependencies. Excessive dependencies between bid packages create coordination difficulties, leading to delays, added costs, and contractual friction among project participants. However, the construction literature provides limited knowledge regarding work structuring approaches, issues, and challenges. The manufacturing literature provides a systematic approach to dividing the project scope into work packages, and the implementation of social network analysis (SNA) in manufacturing is an effective approach to defining and analyzing the divided scope of work at the level of dependencies. This paper presents a case study implementing a similar SNA-based approach for construction bid packages. The study uses SNA to analyze the scope of bid packages and determine the dependencies between scope elements. The method successfully identifies the bid package with the maximum interaction with other trade contractors and the scope elements that are crucial for project performance, providing graphical and quantitative information on bid package dependencies. The study illustrates the potential use of SNA as a systematic approach to analyzing bid package dependencies in construction projects, which can guide the division of crucial scope elements to minimize negative impacts on project performance.
Keywords: work structuring, bid packages, work breakdown, project participants
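A minimal sketch of the SNA step described above: model bid packages as nodes, dependencies as weighted edges, and rank packages by weighted degree to find the one with the maximum interaction. The package names and dependency weights are illustrative, not the case-study data:

```python
import networkx as nx

# Nodes are bid packages; edge weights count scope-element dependencies.
deps = [
    ("structural", "mechanical", 4), ("structural", "electrical", 2),
    ("mechanical", "electrical", 5), ("mechanical", "plumbing", 3),
    ("electrical", "drywall", 1),    ("plumbing", "drywall", 2),
]
G = nx.Graph()
G.add_weighted_edges_from(deps)

# Weighted degree = total dependency load carried by each package.
load = dict(G.degree(weight="weight"))
ranked = sorted(load.items(), key=lambda kv: kv[1], reverse=True)
for pkg, w in ranked:
    print(f"{pkg:12s} dependency weight = {w}")
print("max-interaction package:", ranked[0][0])
```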
Procedia PDF Downloads 80
3174 Assessment of Ecosystem Readiness for Adoption of Circularity: A Multi-Case Study Analysis of Textile Supply Chain in Pakistan
Authors: Azhar Naila, Steuer Benjamin
Abstract:
Over-exploitation of resources and the burden on natural systems have provoked worldwide concern about potential resource and supply risks in the future. It has been estimated that the consumption of materials and resources will double by 2060, substantially increasing the amount of waste and emissions produced by individuals, organizations, and businesses, which necessitates sustainable technological innovations to address the problem. There is therefore a need to design products and services purposefully for material resource efficiency. This directs us toward the conceptualization and implementation of the ‘Circular Economy (CE)’, which has gained considerable attention among policymakers, researchers, and businesses in the past decade. A large amount of literature focuses on the concept of CE. However, contextual empirical research on the need to embrace CE in an emerging economy like Pakistan, where the traditional take-make-dispose economic model is quite common, is still scarce. Textile exports account for approximately 61% of Pakistan's total exports, and the industry provides employment for about 40% of the country's total industrial workforce; it also provides job opportunities to above 10 million farmers, with cotton as the main crop of Pakistan. Consumers, companies, and the government have so far explored very limited CE potential in the country. This gap has motivated the present study. The study is based on a mixed-method approach, in which key informant interviews were conducted in 20 textile manufacturing industries to gain insight into the present state of ecosystem readiness for the adoption of CE. The study covers the following areas: i) the level of understanding of the CE concept among key stakeholders in the textile manufacturing industry, ii) the extent to which companies are pushing boundaries to invest in circularity-based initiatives and take risks, and iii) whether the current national policy framework supports the adoption of CE. A qualitative assessment was undertaken using MAXQDA to analyze the data from the key informant interviews; the data were transcribed and coded for further analysis. The results show that most key stakeholders have a clear understanding of the concept, whereas a few consider it relevant only to the end-of-life treatment of waste generated by the industry. Non-governmental organizations have been observed to be key players in creating awareness among the manufacturing industries. Most companies have expressed their consent to invest in initiatives related to the adoption of CE, whereas a few consider themselves far behind due to a lack of financial resources and support from the responsible institutions. Most industries have an ambitious vision for integrating CE into company policy but appear not to be ready to take significant steps to nurture a culture of experimentation. The government, however, has not played a vital role in the transition towards CE, having been preoccupied with the state's uncertain political situation. At present, Pakistan does not have any policy framework that supports the transition towards CE. Acknowledging the present landscape, a well-informed CE transition is immediately required.
Keywords: circular economy, textile supply chain, textile manufacturing industries, resource efficiency, ecosystem readiness, multi-case study analysis
Procedia PDF Downloads 54
3173 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced channel heights of channels 2-4 significantly increase the sub-rib convection, and the reduced widths effectively remove liquid water and enhance oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a large loss in cell performance was also tested. The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
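A minimal sketch of a simplified conjugate-gradient loop coupled to a black-box solver, in the spirit of the procedure above: finite-difference gradients of the objective drive Fletcher-Reeves updates of the nine channel dimensions. The smooth quadratic surrogate below stands in for the full two-phase CFD fuel-cell model, and the target geometry is made up:

```python
import numpy as np

def cell_power(geom):
    """Surrogate for the CFD-evaluated power density Pcell(H1..H5, W2..W5).
    A smooth toy function stands in for the two-phase fuel-cell model."""
    target = np.array([0.7, 1.2, 0.8, 0.6, 1.3, 1.1, 0.9, 1.0, 1.2])
    return 8900.0 - 2000.0 * np.sum((geom - target) ** 2)

def fd_gradient(f, x, h=1e-4):
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x); e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.ones(9)                    # start: all heights/widths at 1 mm
d = np.zeros_like(x)
g_old = None
for it in range(200):
    g = fd_gradient(cell_power, x)              # ascend: maximize power
    beta = 0.0 if g_old is None else (g @ g) / (g_old @ g_old)  # Fletcher-Reeves
    d = g + beta * d
    x = np.clip(x + 1e-4 * d, 0.1, 2.0)         # fixed step, bounded geometry
    g_old = g
print("optimized geometry (mm):", np.round(x, 2))
print("surrogate power density:", round(cell_power(x), 1))
```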
Procedia PDF Downloads 297
3172 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons
Authors: Kristopher A. Pruitt, Justin M. Hill
Abstract:
The purpose of this research is to determine the pacing and nutrition strategies that minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.
Keywords: nutrition, optimization, pacing, ultramarathons
Procedia PDF Downloads 190
3171 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic, high aspect ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, computational fluid dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for the microfluidic chips. The chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods, with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL/hr, and the displacement versus flow rate is plotted for the different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips enabling real-time observation and control of viscosity changes in biological or chemical reactions.
Keywords: computational fluid dynamics (CFD), high aspect ratio, micropillar array, viscometer
Procedia PDF Downloads 248
3170 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach to synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run. A polydimethylsiloxane (PDMS) microfluidic chip with different functionalities is developed by common soft- and photolithographic techniques. As an interface for the MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions, as exemplified by a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions are investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain candidate reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
Procedia PDF Downloads 302
3169 Use of Galileo Advanced Features in Maritime Domain
Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas
Abstract:
GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA) aimed at identifying the search-and-rescue and ship security alert system needs of maritime users (including operators and fishing stakeholders) and developing operational concepts to answer these needs. The general objective of the GAMBAS project is to support the deployment of Galileo exclusive features in the maritime domain in order to improve safety and security at sea, detection of illegal activities and associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize and disseminate these new associated capabilities. The project aims to demonstrate: improvement of the SAR (search and rescue) and SSAS (ship security alert system) detection and response to maritime distress through the integration of new features into the SSAS beacon, in terms of cost optimization, user-friendliness, integration of Galileo and OS NMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; and optimization of the responsiveness of RCCs (Rescue Co-ordination Centres) to distress situations affecting vessels, with the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the data distribution of SSAS alerts.
Keywords: Galileo new advanced features, maritime, safety, security
Procedia PDF Downloads 94
3168 Optimization of Chitosan Membrane Production Parameters for Zinc Ion Adsorption
Authors: Peter O. Osifo, Hein W. J. P. Neomagus, Hein V. D. Merwe
Abstract:
Chitosan materials from different sources of raw material were characterized in order to determine the optimal preparation conditions and parameters for membrane production. Membrane parameters such as molecular weight, viscosity, and degree of deacetylation were used to evaluate membrane performance for zinc ion adsorption. The molecular weight of the chitosan was found to influence the viscosity of the chitosan/acetic acid solution: an increase in molecular weight (60,000-400,000 kg·kmol⁻¹) resulted in a higher viscosity (0.05-0.65 Pa·s). The effect of the degree of deacetylation on the viscosity is not significant, and the effect of the membrane production parameters (chitosan and acetic acid concentration) on the viscosity is mainly determined by the chitosan concentration. For higher chitosan concentrations, a membrane with a better adsorption capacity was obtained: the adsorption capacity increases from 20 to 130 mg Zn per gram of wet membrane as the chitosan concentration increases from 2 to 7 mass %. Chitosan concentrations below 2 and above 7.5 mass % produced membranes that lack good mechanical properties. The optimum manufacturing conditions, including chitosan concentration, acetic acid concentration, sodium hydroxide concentration, and crosslinking, for chitosan membranes within the workable range were defined by the criteria of adsorption capacity and flux. The adsorption increases (from 50 to 120 mg·g⁻¹) as the acetic acid concentration increases (from 1 to 7 mass %). The sodium hydroxide concentration seems not to have a large effect on the adsorption characteristics of the membrane; however, a maximum was reached at a concentration of 5 mass %. The adsorption capacity per gram of wet membrane strongly increases with the chitosan concentration in the acetic acid solution but remains constant per gram of dry chitosan. The optimum solution for membrane production consists of 7 mass % chitosan and 4 mass % acetic acid in de-ionised water, with the sodium hydroxide concentration for phase inversion at its optimum of 5 mass %. The optimum crosslinking time was determined to be 6 hours (a crosslinking percentage of 18 %). As the crosslinking time increases from 0 to 12 hours, the zinc adsorption decreases (from 150 to 50 mg·g⁻¹); after a crosslinking time of 12 hours, the adsorption capacity remains constant. This trend is comparable to the effect on flux through the membrane, which decreases (from 10 to 3 L·m⁻²·h⁻¹) as the crosslinking time increases from 0 to 12 hours and reaches a constant minimum after 12 hours.
Keywords: chitosan, membrane, waste water, heavy metal ions, adsorption
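As a rough illustration of how the reported capacity-versus-concentration trend could be used to pick a recipe inside the workable range, the following sketch linearly interpolates the 20-130 mg/g figures quoted above; the interpolation function and the candidate list are assumptions for illustration, not measured data.

```python
# Illustrative selection of a membrane recipe by adsorption capacity within
# the workable chitosan range reported above (2-7.5 mass %). Capacities are
# a rough linear interpolation of the 20-130 mg/g trend, not measured data.

def capacity_mg_per_g(chitosan_pct):
    """Approximate Zn adsorption capacity (mg per g wet membrane), assumed."""
    return 20 + (130 - 20) * (chitosan_pct - 2) / (7 - 2)

candidates = [2, 3, 4, 5, 6, 7]  # chitosan mass % within the workable range
best = max(candidates, key=capacity_mg_per_g)
print(best, round(capacity_mg_per_g(best)))  # -> 7, 130
```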
Procedia PDF Downloads 388
3167 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve a result. When it is applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of a sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have already been calculated. Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (distinguished by the time of peak ground acceleration).
Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
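For context on the modal-combination baselines mentioned above, a minimal SRSS sketch is given below: the peak response of each mode is combined as the square root of the sum of their squares. The modal displacement values are illustrative only, not from the paper's case studies.

```python
import numpy as np

# Minimal SRSS combination of peak modal responses: the peak response of
# each mode is combined as the square root of the sum of squares.
# The values below are illustrative, not from the paper's case studies.
peak_modal_displacements = np.array([0.12, 0.05, 0.02])  # metres, modes 1-3

srss_estimate = np.sqrt(np.sum(peak_modal_displacements ** 2))
print(f"SRSS displacement estimate: {srss_estimate:.4f} m")
```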
Procedia PDF Downloads 356
3166 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips
Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley
Abstract:
French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce the crispy french fries that consumers enjoy. In addition to the traditional manufacturing process, the industry is applying a relatively new process called pulsed electric field (PEF) treatment to the whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that determine texture and about its synergistic interactions with the other manufacturing steps of french fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potato was measured using a differential scanning calorimeter. Cation content (K+, Ca2+, and Mg2+) was determined by inductively coupled plasma optical emission spectrometry. The firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the french fries, crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers were able to perceive sensory differences between french fries that were PEF-treated and those that were not. The concentrations of K+, Ca2+, and Mg2+ decreased significantly in the raw potatoes after the PEF treatment. The PEF treatment significantly increased the modulus of elasticity, compression strain, compression force, and toughness of the raw potato. The PEF-treated raw potatoes were firmer and stiffer, and their structure held together longer, resisted a higher force before fracture, and stretched further than that of the untreated ones. The stress-strain relationship exhibited by the PEF-treated raw potato could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca2+ and Mg2+ cations to reach the cell wall and middle lamella and become available for cross-linking with the pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures, and lower gelatinisation ranges than the untreated raw potatoes. The final moisture content of the french fries was not significantly affected by the PEF treatment. The oil content of the PEF-treated potatoes was lower than that of the untreated french fries; however, the difference was not statistically significant at the 5 % level. The PEF treatment did not have an overall significant effect on french fry crispness (modulus of elasticity), flexural stress, or strain. The triangle tests show that most consumers could not distinguish french fries that received a PEF treatment from those that did not.
Keywords: french fries, mechanical properties, PEF, potatoes
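As a sketch of how a crispness measure could be extracted from a three-point bend test, the snippet below applies the standard beam-theory relation E = L^3·m / (4·b·d^3) for a rectangular cross-section to hypothetical fry geometry and a hypothetical force-deflection slope; none of the numbers come from the study.

```python
# Sketch of extracting a flexural modulus from a three-point bend test,
# the kind of test used above to quantify french-fry crispness.
# All geometry and slope values below are hypothetical.
span_L = 0.04    # support span (m), assumed
width_b = 0.009  # fry width (m), assumed
depth_d = 0.009  # fry thickness (m), assumed
slope_m = 250.0  # initial slope of the force-deflection curve (N/m), assumed

# Standard beam-theory relation for a rectangular cross-section:
# E = L^3 * m / (4 * b * d^3)
flexural_modulus = span_L**3 * slope_m / (4 * width_b * depth_d**3)
print(f"Flexural modulus ~ {flexural_modulus / 1e3:.1f} kPa")
```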
Procedia PDF Downloads 236
3165 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures the anticipated feedback information associated with the possible observation outcomes of projected path execution, while modeling discrete agent actions over all possible moving directions. The problem model further takes advantage of a network representation to encompass the decision variables, expedite compact constraint specification, and achieve a substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery to efficiently compute near-optimal solutions for practical-size problems, while a robust upper bound is obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
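A much-simplified, single-agent, open-loop version of such a search path planning MIP can be sketched with the open-source PuLP modeler (used here in place of CPLEX); the grid size, target priors, and start cell are assumptions, and the model deliberately omits the paper's multi-agent structure and anticipated observation feedback.

```python
import pulp

# Toy single-agent, open-loop discrete search path planning MIP: maximize
# the detection probability collected along a T-step path on a grid.
# A simplified illustration only, not the paper's formulation.
W, T = 3, 5                    # 3x3 grid, 5 time steps (assumed)
cells = [(i, j) for i in range(W) for j in range(W)]
p = {c: 0.05 + 0.1 * (c[0] + c[1]) for c in cells}  # assumed target priors

def neighbours(c):
    """Cells reachable in one step, including staying put."""
    i, j = c
    steps = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(i + di, j + dj) for di, dj in steps
            if 0 <= i + di < W and 0 <= j + dj < W]

prob = pulp.LpProblem("search_path", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", [(t, c) for t in range(T) for c in cells],
                          cat="Binary")       # agent at cell c at time t
y = pulp.LpVariable.dicts("y", cells, cat="Binary")  # cell searched at least once

prob += pulp.lpSum(p[c] * y[c] for c in cells)       # cumulative detection proxy
for t in range(T):
    prob += pulp.lpSum(x[t, c] for c in cells) == 1  # one cell per time step
prob += x[0, (0, 0)] == 1                            # start in the corner
for t in range(T - 1):
    for c in cells:                                  # moves limited to neighbours
        prob += x[t + 1, c] <= pulp.lpSum(x[t, n] for n in neighbours(c))
for c in cells:
    prob += y[c] <= pulp.lpSum(x[t, c] for t in range(T))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
path = [c for t in range(T) for c in cells if x[t, c].value() == 1]
print("path:", path, "objective:", pulp.value(prob.objective))
```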
Procedia PDF Downloads 372
3164 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of an optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; here, the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model previously applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data, and we establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos has been to find a model that explains chaotic behaviour; this model may reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
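As a minimal classical starting point for the transition described above, the sketch below simulates an AR(1) series and recovers its coefficient with a scalar Kalman-style recursive estimator; all parameter values are illustrative assumptions, and this is not the paper's QTS model itself.

```python
import numpy as np

# Minimal classical baseline: simulate an AR(1) series
# x_t = phi * x_{t-1} + eps_t and recover phi with a scalar Kalman-style
# recursive estimator. Parameter values are illustrative only.
rng = np.random.default_rng(0)
phi_true, n, noise_sd = 0.7, 2000, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=noise_sd)

# Recursive least squares, i.e. a Kalman filter with a constant state phi.
phi_hat, P = 0.0, 1.0                # initial estimate and its variance
R = noise_sd ** 2                    # observation noise variance
for t in range(1, n):
    h = x[t - 1]                     # regressor
    k = P * h / (P * h * h + R)      # Kalman gain
    phi_hat += k * (x[t] - phi_hat * h)
    P *= (1 - k * h)
print(f"estimated phi ~ {phi_hat:.3f} (true {phi_true})")
```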
Procedia PDF Downloads 469