Search results for: optimization methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17890

16090 Floating Oral in Situ Gelling System of Anticancer Drug

Authors: Umme Hani, Mohammed Rahmatulla, Mohammed Ghazwani, Ali Alqahtani, Yahya Alhamhoom

Abstract:

Background and introduction: Neratinib is a potent anticancer drug used for the treatment of breast cancer. It is poorly soluble at higher pH, which reduces its therapeutic effect in the lower gastrointestinal tract (GIT) and leads to poor bioavailability. An attempt has been made to prepare and develop a gastro-retentive system of Neratinib to improve the drug's bioavailability in the GIT by enhancing gastric retention time. Materials and methods: In the present study, a three-factor, two-level (2³) factorial design was used to examine the effects of three independent variables (factors), namely sodium alginate (A), sodium bicarbonate (B) and sodium citrate (C), on dependent variables such as in vitro gelation, in vitro floating, water uptake and percentage drug release. Results: All formulations showed pH in the range 6.7 ±0.25 to 7.4 ±0.24, and percentage drug content was observed to be 96.3 ±0.27 to 99.5 ±0.28%; in vitro gelation was immediate and persisted for an extended period. Water uptake ranged between 9.01 ±0.15 and 31.01 ±0.25%, and floating lag time was estimated from 7 ±0.39 to 57 ±0.36 sec. F4 and F5 remained floating even after 12 h. All formulations released around 90% of the drug within 12 h. The selected independent variables were observed to affect the dependent variables. Conclusion: The developed system may be a promising alternative approach to augment gastric retention of drugs and enhance their therapeutic efficacy.
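
To make the design concrete, below is a minimal sketch of how such a 2³ factorial analysis can be run: eight runs at coded ±1 levels for the three factors, with main effects estimated by least squares. The response values are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from itertools import product

# 2^3 factorial sketch: 8 runs at coded levels -1/+1 for sodium alginate (A),
# sodium bicarbonate (B) and sodium citrate (C). Responses are hypothetical.
design = np.array(list(product([-1, 1], repeat=3)), dtype=float)  # 8 runs x 3 factors
response = np.array([9.0, 12.5, 15.0, 18.2, 22.4, 25.1, 28.0, 31.0])  # e.g. % water uptake

X = np.column_stack([np.ones(len(design)), design])   # intercept + A, B, C
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
effects = 2 * coef[1:]        # factorial main effect = 2 x slope at coded +/-1 levels
for name, eff in zip("ABC", effects):
    print(f"main effect of {name}: {eff:+.2f}")
```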

Keywords: neratinib, 2³ factorial design, sodium alginate, floating, in situ gelling system

Procedia PDF Downloads 163
16089 Review of Research on Effectiveness Evaluation of Technology Innovation Policy

Authors: Xue Wang, Li-Wei Fan

Abstract:

Technology innovation has become the driving force of social and economic development and transformation. The guidance and support of public policies are important conditions for achieving technology innovation goals, and policy effectiveness evaluation is instructive for policy learning and adjustment. This paper reviews existing studies that evaluate the effectiveness of policy-driven technological innovation. We used 167 articles from the WOS and CNKI databases as samples to clarify how technological innovation indicators are measured and to analyze the classification and application of policy evaluation methods. In general, innovation input and innovation output are the two main aspects of technological innovation index design, among which patents are the focus of research: the number of patents reflects the scale of technological innovation, while patent quality reflects the value of innovation from multiple aspects. As for policy evaluation methods, statistical analysis methods are applied to policy formulation, selection and ex-post evaluation, analyzing the effects of policy implementation both qualitatively and quantitatively. Bibliometric methods are mainly based on public policy texts, distinguishing inter-governmental relationships and the multi-dimensional value of policies. Decision analysis focuses on establishing and measuring comprehensive evaluation index systems for public policy. Economic analysis methods focus on the performance and output of technological innovation to test policy effects. Finally, the paper outlines prospective directions for future research.

Keywords: technology innovation, index, policy effectiveness, evaluation of policy, bibliometric analysis

Procedia PDF Downloads 70
16088 3D Numerical Studies and Design Optimization of a Swallowtail Butterfly with Twin Tail

Authors: Arunkumar Balamurugan, G. Soundharya Lakshmi, V. Thenmozhi, M. Jegannath, V. R. Sanal Kumar

Abstract:

Aerodynamics of insects is of topical interest in aeronautical industries due to its wide applications to various types of Micro Air Vehicles (MAVs). Note that MAVs have small geometric dimensions, operate at significantly lower speeds on the order of 10 m/s, and their Reynolds numbers are approximately 150,000 or lower. In this paper, a numerical study has been carried out to capture the flow physics of a biologically inspired swallowtail butterfly with a fixed wing and twin tail at a flight speed of 10 m/s. Comprehensive numerical simulations have been carried out on the swallowtail butterfly with twin tail flying at 10 m/s with uniform upper and lower angles of attack in both lateral and longitudinal positions, in order to identify the wing orientation with the best aerodynamic efficiency. The grid system in the computational domain was selected after detailed grid-refinement exercises. Parametric analytical studies were carried out with different lateral and longitudinal angles of attack to find the best aerodynamic efficiency at the same flight speed. The results reveal that the lift coefficient increases significantly with marginal changes in the longitudinal angle, and vice versa; for the drag coefficient, the conventional behavior was observed, viz., drag increases at high longitudinal angles. We observed that the change of twin tail section has a significant impact on the formation of vortices and on the aerodynamic efficiency of the MAVs. We conclude that for every lateral angle there is an exact longitudinal orientation at which an aerodynamically efficient flying condition exists for any MAV. This numerical study is a pointer towards the design optimization of twin-tail MAVs with flapping wings.

Keywords: aerodynamics of insects, MAV, swallowtail butterfly, twin tail MAV design

Procedia PDF Downloads 395
16087 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
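
The hybrid block construction itself is not detailed in the abstract; as a simpler stand-in, the sketch below integrates a fractional initial value problem with the classical Grünwald-Letnikov scheme, whose growing memory sum shows concretely why fractional systems strain traditional fixed-history steppers. The test equation and all parameters are illustrative.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built recursively
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def solve_fractional(f, alpha, y0, t_end, h):
    """Explicit Grünwald-Letnikov scheme for D^alpha y = f(t, y), 0 < alpha <= 1."""
    n_steps = int(t_end / h)
    w = gl_weights(alpha, n_steps)
    y = np.empty(n_steps + 1)
    y[0] = y0
    h_alpha = h ** alpha
    for n in range(1, n_steps + 1):
        # the memory sum over the whole history is the hallmark of fractional order
        history = np.dot(w[1:n + 1], y[n - 1::-1])
        y[n] = h_alpha * f((n - 1) * h, y[n - 1]) - history
    return y

# Illustrative test problem: D^0.8 y = -y, y(0) = 1 (a Mittag-Leffler-type decay)
y = solve_fractional(lambda t, y: -y, alpha=0.8, y0=1.0, t_end=5.0, h=0.01)
```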

Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods

Procedia PDF Downloads 43
16086 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. Accurate prediction of incipient faults from transformer oil is needed in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique employing gradient descent algorithms and a Support Vector Machine (SVM) was used to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: training and testing. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data; these two datasets are used to establish the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on an SVM and gradient descent algorithms is presented, with satisfactory diagnostic capability and a higher success rate in predicting incipient transformer faults than existing diagnostic methods.
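
A minimal sketch of the core training step as described, a linear SVM fitted by sub-gradient descent on the regularized hinge loss. The two "gas" features and labels below are synthetic stand-ins; the paper's actual features, kernel, and hyperparameters are not given in the abstract.

```python
import numpy as np

def train_linear_svm(X, y, lam=1e-2, lr=1e-3, epochs=200):
    """Linear SVM via sub-gradient descent on the regularized hinge loss.
    X: (n, d) features (e.g. dissolved-gas concentrations), y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                    # samples violating the margin
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic stand-in data: fault (+1) vs normal (-1)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
```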

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 322
16085 Estimation of Effective Radiation Dose Following Computed Tomography Urography at Aminu Kano Teaching Hospital, Kano, Nigeria

Authors: Idris Garba, Aisha Rabiu Abdullahi, Mansur Yahuza, Akintade Dare

Abstract:

Background: CT urography (CTU) is an efficient radiological examination for the evaluation of urinary system disorders. However, patients are exposed to a significant radiation dose, which is associated with increased cancer risk. Objectives: To determine the Computed Tomography Dose Index following CTU and to evaluate organ equivalent doses. Materials and Methods: A prospective cohort study was carried out at a tertiary institution located in Kano, northwestern Nigeria. Ethical clearance was sought and obtained from the research ethics board of the institution. Demographic, scan parameter and CT radiation dose data were obtained from patients who underwent the CTU procedure. Effective dose, organ equivalent doses, and cancer risks were estimated using SPSS statistical software version 16 and CT dose calculator software. Result: A total of 56 patients were included in the study, consisting of 29 males and 27 females. The most common indication for CTU examination was renal cyst, seen commonly among young adults (15-44 years). The CT radiation dose values for CTU were a DLP of 2320 mGy·cm, a CTDIw of 9.67 mGy, and an effective dose of 35.04 mSv. The probability of cancer was estimated at 600 per million CTU examinations. Conclusion: In this study, the radiation dose for CTU is considered significantly high, with an increased probability of cancer risk. Wide variations between patient doses suggest that optimization has not yet been achieved. Patient radiation dose estimates should be taken into consideration when imaging protocols are established for CT urography.
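
A quick cross-check of the reported effective dose is possible with the standard DLP-based estimate E ≈ k·DLP. The abstract does not state which conversion coefficient its dose software applied, so the commonly tabulated abdomen-pelvis value is assumed below; all coefficients are assumptions, not the paper's.

```python
# E ~ k * DLP, with k the anatomical-region conversion coefficient
dlp = 2320.0                  # mGy*cm, as reported
k = 0.015                     # mSv/(mGy*cm), assumed abdomen-pelvis coefficient
effective_dose = k * dlp      # 34.8 mSv, consistent with the reported 35.04 mSv
# Order-of-magnitude risk check with the ICRP 103 nominal coefficient (~5.5%/Sv).
# The paper's software, presumably using age- and sex-specific factors, reports
# 600 per million examinations, so the choice of coefficients clearly matters.
risk_per_exam = effective_dose * 1e-3 * 5.5e-2    # ~1.9e-3
print(effective_dose, risk_per_exam)
```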

Keywords: CT urography, cancer risks, effective dose, radiation exposure

Procedia PDF Downloads 345
16084 Examining the Relationships among Components That Contribute to Interactive Engagement and Educational Performance

Authors: Shri Krishna Mishra

Abstract:

In an educational context, technology can promote interactive engagement only when it is used in conjunction with interactive engagement methods. This study therefore examines the relationships among components that contribute to higher levels of interactive engagement and performance, such as interactive engagement methods, technology, intrinsic motivation, and deep learning. 526 students participated in this study. With structural equation modelling, the authors test the conceptual model and identify a satisfactory model fit. The results indicate that interactive engagement methods, technology, and intrinsic motivation have significant relationships with interactive engagement; deep learning mediates the relationships of the other variables with performance.

Keywords: interactive engagement, educational performance, intrinsic motivation, deep learning

Procedia PDF Downloads 454
16083 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional Cone Beam CT (CBCT) has turned into a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres placed at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full-Width-Half-Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, as prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in the real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections that localize the targets. The proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one third of the projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
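
A minimal sketch of the evaluation metric follows: the FWHM of a sampled 1D profile through a local PSF, with linear interpolation at the two half-maximum crossings. The Gaussian test profile and 0.5 mm spacing are illustrative, not phantom data.

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a sampled 1D PSF profile, with linear
    interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def cross(i_lo, i_hi):
        # fractional index where the profile crosses `half` between two samples
        return i_lo + (half - p[i_lo]) / (p[i_hi] - p[i_lo]) * (i_hi - i_lo)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * spacing

# Hypothetical Gaussian PSF sampled every 0.5 mm; true FWHM = 2.355 * sigma ~ 2.83 mm
x = np.arange(-20, 21) * 0.5
print(fwhm(np.exp(-x**2 / (2 * 1.2**2)), spacing=0.5))
```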

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 132
16082 A Study of Quality Assurance and Unit Verification Methods in Safety Critical Environment

Authors: Miklos Taliga

Abstract:

In the present case study, we examined the development and testing methods of systems that contain safety-critical elements in different industrial fields. We first observed the classical object-oriented development and testing environment, as both the medical technology and automobile industries approach the development of safety-critical elements that way. Subsequently, we examined model-based development. We introduce the quality parameters that define development and testing. Taking modern agile methodology (Scrum) into consideration, we examined whether, and to what extent, the methodologies we found fit into this environment.

Keywords: safety-critical elements, quality management, unit verification, model-based testing, agile methods, scrum, metamodel, object-oriented programming, field-specific modelling, sprint, user story, UML standard

Procedia PDF Downloads 585
16081 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections

Authors: Ravneil Nand

Abstract:

Cooperative coevolution uses problem decomposition to solve a larger problem by breaking it down into a number of smaller sub-problems. Different problem decomposition methods have their own strengths and limitations, depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as Hidden-Neuron Level Decomposition (HNL). The HNL method is compared with established problem decomposition methods in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method, and is competitive with methods from the literature.

Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse

Procedia PDF Downloads 335
16080 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, scheduling maintenance stops, and keeping an optimum stock of spare parts as the excavation progresses. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The rapid evolution of data science in recent years makes it possible to analyze the key and most critical machinery parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara, Mexico as a case study, the feasibility of predicting the behavior and status of cutting tools is assessed using Specific Energy on the one hand and data science applied to parameters such as torque, penetration, and contact force on the other. The results obtained through both techniques are analyzed and verified against the wear and field conditions observed in the excavation, in order to determine their effectiveness and predictive capacity. In conclusion, the application of digital tools and calculation algorithms to the analysis of cutting head wear, compared to purely empirical methods, allows early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
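
The abstract does not define its Specific Energy formula; one common formulation for full-face machines (a Teale-style thrust term plus rotation term) can serve as a sketch. The formulation and all machine readings below are assumptions, not the paper's data.

```python
import math

def specific_energy(thrust_kN, torque_kNm, rpm, advance_rate_mm_min, diameter_m):
    """Teale-style specific energy (MJ/m^3) for a full-face TBM, a sketch with an
    assumed formulation: SE = F/A + 2*pi*N*T / (A*v), v being the advance rate."""
    area = math.pi * diameter_m**2 / 4.0              # excavated face area, m^2
    v = advance_rate_mm_min * 1e-3 / 60.0             # advance rate, m/s
    n = rpm / 60.0                                    # revolutions per second
    thrust_term = thrust_kN * 1e3 / area              # J/m^3 from thrust
    torque_term = 2 * math.pi * n * torque_kNm * 1e3 / (area * v)  # J/m^3 from rotation
    return (thrust_term + torque_term) / 1e6          # MJ/m^3

# Hypothetical EPB readings: 12,000 kN thrust, 3,500 kN*m torque, 1.5 rpm,
# 40 mm/min advance, 6.5 m diameter -> ~25 MJ/m^3
print(specific_energy(12000, 3500, 1.5, 40, 6.5))
```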

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 49
16079 Mariculture Trials of the Philippine Blue Sponge Xestospongia sp.

Authors: Clairecynth Yu, Geminne Manzano

Abstract:

The mariculture potential of the Philippine blue sponge, Xestospongia sp., was assessed through pilot sponge culture in the open sea at two different biogeographic regions in the Philippines. Thirty explants were allocated at random to the Puerto Galera, Oriental Mindoro culture setup, and another nine were transported to Lucero, Bolinao, Pangasinan. Two different culture methods for the sponge explants, the lantern method and the wall method, were employed to assess the production of Renieramycin M. Both methods proved effective in growing the sponge explants, and Thin Layer Chromatography (TLC) showed that Renieramycin M is present in the sponges. The effect of partial harvesting on the growth and survival rates of the blue sponge in the Puerto Galera setup was also determined. Results showed a higher growth rate in the partially harvested explants under both culture methods as compared to the unharvested explants.

Keywords: chemical ecology, porifera, sponge, Xestospongia sp.

Procedia PDF Downloads 273
16078 Framework for Integrating Big Data and Thick Data: Understanding Customers Better

Authors: Nikita Valluri, Vatcharaporn Esichaikul

Abstract:

With the popularity of data-driven decision making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented that delivers a more robust and efficient approach to data-driven decision-making with respect not only to Big data but also to 'Thick data', a form of qualitative data. In support of this, an example from the retail sector is illustrated in which the framework is put into action to yield insights and leverage business intelligence. An interpretive approach is used to analyze findings from both kinds of data. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory and ethnomethodology). The final goal of this study is to establish the framework as a basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is thus an enhancement of the traditional data-driven decision-making approach, which depends mainly on quantitative data.

Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data

Procedia PDF Downloads 162
16077 Comparison of Different Extraction Methods for the Determination of Polyphenols

Authors: Senem Suna

Abstract:

Extraction of bioactive compounds from foods and food products is an important topic and a new trend related to health-promoting effects. As a result of the increasing interest in natural foods, different methods are used to recover these components, especially polyphenols. However, special attention has to be paid to the selection of proper techniques and processing technologies (supercritical fluid extraction, microwave-assisted extraction, ultrasound-assisted extraction, production of powdered extracts) for each kind of food in order to obtain phenolic compounds with maximum benefit. Advanced research and development are needed to meet consumers' demand for healthy food and to manage quality and safety requirements. In this review, the advantages and disadvantages of different extraction methods, their potential for use in the food industry, and the effects of polyphenols are discussed in detail. By evaluating the results of several studies, the review aims to support the selection of the most suitable method for each specific food.

Keywords: bioactives, extraction, powdered extracts, supercritical fluid extraction

Procedia PDF Downloads 239
16076 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, network sizes are becoming increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. It is therefore particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed and compared against Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs), with our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives, to explore the opportunities and challenges for future research, and give an outlook on the further development of FPGA-based accelerators.

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 128
16075 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. Peak asymmetry may cause an incorrect calculation of the sample concentration, and the analysis time is unacceptable, especially in the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the response-surface plot as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method was that described in USP 37 for quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2, in order to delimit the ideal analytical conditions. Changes were made to the composition of the USP mobile phase (USP-MP): USP-MP: Methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), to the flow rate (0.8, 1.0 and 1.2 mL·min⁻¹) and to the oven temperature (30, 35, and 40 °C). The USP method quantifies the drug over a long run time (40-50 minutes) and uses a high flow rate (1.5 mL·min⁻¹), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would only be acceptable if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable. Therefore, optimization was pursued in order to reduce the analysis time while improving peak resolution and TF. From the analysis of the response-surface plot, it was possible to confirm the ideal analytical condition: 45 °C, 0.8 mL·min⁻¹ and 80:20 USP-MP: Methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak and a TF value of 1.17. This ensures good co-elution of the HCQ isomers and an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, without requiring separation of the isomers.
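
A sketch of the design-space idea behind this kind of optimization: fit a quadratic response surface for TF over the three coded factors and keep the settings whose predicted TF lies in the 0.98-1.2 target range. The TF "measurements" below are invented placeholders, not the paper's data.

```python
import numpy as np
from itertools import product

levels = [-1, 0, 1]
X = np.array(list(product(levels, repeat=3)), dtype=float)   # 3^3 = 27 runs
rng = np.random.default_rng(1)
# hypothetical tailing-factor responses over (methanol fraction, flow, temperature)
tf = 1.1 + 0.2 * X[:, 0] - 0.15 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 0.02, len(X))

def design_matrix(X):
    # intercept, linear terms, two-way interactions and quadratic terms
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), tf, rcond=None)
pred = design_matrix(X) @ beta
design_space = X[(pred >= 0.98) & (pred <= 1.2)]   # candidate operating points
print(len(design_space), "of 27 settings inside the TF design space")
```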

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 639
16074 Effects of Storage Methods on Proximate Compositions of African Yam Bean (Sphenostylis stenocarpa) Seeds

Authors: Iyabode A. Kehinde, Temitope A. Oyedele, Clement G. Afolabi

Abstract:

One of the limitations of African yam bean (AYB) (Sphenostylis stenocarpa) is its poor storage ability, due to the adverse effect of seed-borne fungi. This study was conducted to examine the effects of storage methods on the nutritive composition of AYB seeds stored in three types of storage materials, viz., jute bags, polypropylene bags, and plastic bowls. Freshly harvested AYB seeds were stored in all the storage materials for 6 months using a 2 × 3 factorial design (2 AYB cultivars and 3 storage methods) with 3 replicates. Proximate analysis of the stored AYB seeds was carried out at 3 and 6 months after storage using standard methods. The temperature and relative humidity of the storeroom were recorded monthly with a Kestrel 4000 pocket weather tracker. Seeds stored in jute bags gave the best values for crude protein (24.87%), ash (5.69%) and fat content (6.64%), but recorded the lowest values for crude fibre (2.55%), carbohydrate (50.86%) and moisture content (12.68%) at the 6th month of storage. The temperature of the storeroom decreased from 32.9 °C to 28.3 °C, while the relative humidity increased from 78% to 86%. A decreased incidence of field fungi, namely Rhizopus oryzae, Aspergillus flavus, Geotrichum candidum, Aspergillus fumigatus and Mucor miehei, was accompanied by an increase in storage fungi, viz., Aspergillus niger, Mucor hiemalis, Penicillium expansum and Penicillium atrovenetum, with prolonged storage. The study showed that, of the three storage materials, the jute bag was the most effective at preserving AYB seeds.

Keywords: storage methods, proximate composition, African Yam Bean, fungi

Procedia PDF Downloads 134
16073 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSPs) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can easily be integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibilities, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness matches the nature of the problem, where several scenarios (30-40 simulations) are required before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate; the simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as strict Finish-Start precedence relationships (successor tasks must start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements that "cyclic" resources be in a specific state to perform tasks (these resources have multiple possible states, with only one active at a time, and tasks can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization; it solves an instance with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.
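
A minimal sketch of the kind of greedy heuristic described, a serial schedule-generation scheme with a pluggable dynamic cost function evaluated at each decision step. The task fields and the priority rule below are illustrative, not the authors' exact design.

```python
# Greedy serial schedule-generation sketch for an RCPSP with one renewable resource.
def greedy_schedule(tasks, capacity, cost):
    """tasks: {id: {"dur": int, "preds": set, "req": int}}.
    cost(j, start, finish) returns a dynamic priority (lower = schedule sooner)."""
    start, finish, usage = {}, {}, {}          # usage[t] = resource units in use at t
    unscheduled = set(tasks)
    while unscheduled:
        ready = [j for j in unscheduled if tasks[j]["preds"].issubset(finish)]
        j = min(ready, key=lambda j: cost(j, start, finish))   # situational assessment
        t = max([finish[p] for p in tasks[j]["preds"]], default=0)
        # shift right until the resource profile fits over the whole duration
        while any(usage.get(t + d, 0) + tasks[j]["req"] > capacity
                  for d in range(tasks[j]["dur"])):
            t += 1
        for d in range(tasks[j]["dur"]):
            usage[t + d] = usage.get(t + d, 0) + tasks[j]["req"]
        start[j], finish[j] = t, t + tasks[j]["dur"]
        unscheduled.remove(j)
    return start, finish

tasks = {1: {"dur": 3, "preds": set(), "req": 2},
         2: {"dur": 2, "preds": {1}, "req": 2},
         3: {"dur": 4, "preds": set(), "req": 1}}
start, finish = greedy_schedule(tasks, capacity=3, cost=lambda j, s, f: -tasks[j]["dur"])
```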

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 199
16072 Optimal Management of Forest Stands under Wind Risk in the Czech Republic

Authors: Zohreh Mohammadi, Jan Kaspar, Peter Lohmander, Robert Marusak, Harald Vacik, Ljusk Ola Eriksson

Abstract:

Storms are important damaging agents in European forest ecosystems; in recent decades they have caused significant economic losses in European forestry. This study investigates the problem of optimal harvest planning when forest stands risk being felled by storms. One of the most applicable mathematical methods for optimizing forest management is stochastic dynamic programming (SDP), which belongs to the class of adaptive optimization methods. Sequential decisions, such as harvest decisions, can be optimized based on sequential information about events that cannot be perfectly predicted, such as future storms and the future states of wind protection provided by other forest stands. In this paper, stochastic dynamic programming is used to maximize the expected present value of the profits from an area consisting of several forest stands. The region of analysis is the Czech Republic. The harvest decisions in a particular time period should be taken simultaneously in all neighboring stands, because different stands protect each other from possible winds. The optimal harvest age of a particular stand is a function of wind speed and the different wind protection effects. The optimal harvest age often decreases with wind speed, but it cannot be determined for one stand at a time: a particular stand protects other stands and is itself protected by its neighbors. In some forest stands, it may even be rational to increase the harvest age under the influence of stronger winds, in order to protect more valuable stands in the neighborhood. It is important to integrate wind risk into forestry decision-making.
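
A toy version of the SDP idea for a single stand is sketched below; the paper's model couples many stands through mutual wind protection, which this sketch omits, and all values, probabilities and state spaces are invented for illustration.

```python
import numpy as np

ages = np.arange(11)               # stand age classes 0..10
value = 10.0 * ages                # standing timber value per age class (assumed)
salvage = 0.4 * value              # value recovered if the stand is storm-felled
p_storm, discount = 0.05, 0.95

def q_keep(a, V):
    # expected value of waiting: the stand grows, unless a storm fells it
    grow = min(a + 1, len(ages) - 1)
    return (1 - p_storm) * discount * V[grow] + p_storm * (salvage[a] + discount * V[0])

def q_harvest(a, V):
    return value[a] + discount * V[0]          # clear-cut now, restart at age 0

V = np.zeros(len(ages))
for _ in range(200):                           # value iteration to (near) stationarity
    V = np.array([max(q_keep(a, V), q_harvest(a, V)) for a in ages])

policy = ["harvest" if q_harvest(a, V) >= q_keep(a, V) else "keep" for a in ages]
print(policy)                                  # harvest threshold age under wind risk
```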

Keywords: Czech Republic, forest stands, stochastic dynamic programming, wind risk

Procedia PDF Downloads 147
16071 Generation of Quasi-Measurement Data for On-Line Process Data Analysis

Authors: Hyun-Woo Cho

Abstract:

To ensure the safety of a manufacturing process, one should quickly identify the assignable cause of a fault on an on-line basis. To this end, many statistical techniques, including linear and nonlinear methods, have been frequently utilized. However, such methods suffer from the major problem of small sample size, which is mostly attributed to the characteristics of the empirical models used as reference models. This work presents a new method to overcome the insufficiency of measurement data in monitoring and diagnosis tasks. Quasi-measurement data are generated from existing data based on two indices: similarity and importance. The performance of the method is demonstrated using a real data set. The results show that the presented method handles the insufficiency problem successfully. In addition, it is quite efficient in terms of computational speed and memory usage, so on-line implementation of the method for monitoring and diagnosis purposes is straightforward.
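
The abstract does not define its similarity and importance indices, so the sketch below uses stand-ins (inverse Euclidean distance for similarity, distance to the data mean for importance) purely to illustrate how quasi-measurements can be blended from existing samples.

```python
import numpy as np

def quasi_measurements(X, n_new, k=3, seed=0):
    """Generate n_new quasi-measurements as convex blends of similar samples,
    drawn preferentially from 'important' samples. Index definitions are assumed."""
    rng = np.random.default_rng(seed)
    center = X.mean(axis=0)
    importance = 1.0 / (1.0 + np.linalg.norm(X - center, axis=1))   # assumed index
    prob = importance / importance.sum()
    out = []
    for _ in range(n_new):
        i = rng.choice(len(X), p=prob)                  # favor "important" samples
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]                   # k most similar samples
        w = rng.dirichlet(np.ones(k))
        out.append(w @ X[nbrs])                         # convex blend of neighbors
    return np.vstack(out)

X = np.random.default_rng(1).normal(size=(20, 4))       # 20 real measurements, 4 variables
X_aug = np.vstack([X, quasi_measurements(X, n_new=40)]) # enlarged reference set
```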

Keywords: data analysis, diagnosis, monitoring, process data, quality control

Procedia PDF Downloads 482
16070 Using Learning Apps in the Classroom

Authors: Janet C. Read

Abstract:

UCLan set up a collaboration with Lingokids to assess the Lingokids learning app's impact on learning outcomes in UK classrooms for children aged 3 to 5 years. Data gathered during the controlled study with 69 children includes attitudinal data, engagement, and learning scores. The data show that enjoyment while learning was higher among children using the game-based app than among children using other traditional methods. It is worth pointing out that, among older children, engagement when using the learning app was significantly higher than with traditional methods. According to the existing literature, there is a direct correlation between engagement, motivation, and learning. Therefore, this study provides relevant data points to conclude that the Lingokids learning app serves its purpose of encouraging learning through playful and interactive content. That being said, learning outcomes should be assessed with a wider range of methods in further studies. Likewise, it would be beneficial to assess the usability and playability of the app in order to evaluate it from other angles.

Keywords: learning app, learning outcomes, rapid test activity, Smileyometer, early childhood education, innovative pedagogy

Procedia PDF Downloads 71
16069 Feedback from Experiments on Management Methods against Japanese Knotweed on a River Appendix of the Rhône between 2015 and 2020

Authors: William Brasier, Nicolas Rabin, Celeste Joly

Abstract:

Japanese knotweed (Fallopia japonica) is very present on the banks of the Rhône, colonizing more and more areas along the river. The Compagnie Nationale du Rhône (C.N.R.), which manages the river, has experimented with several control techniques in recent years. Since 2015, 15 experimental plots have been monitored on the banks of a restored river appendix to measure the effect of three control methods: confinement by felt, repeated mowing, and the planting of competing species and/or species with allelopathic power: Viburnum opulus, Rhamnus frangula, Sambucus ebulus and Juglans regia. Each year, the number of stems, the number of elderberry plants, the height of the plants, and photographs were collected. After six years of monitoring, the results show that the density of knotweed stems decreased by 50 to 90% on all plots. The control methods are sustainable and are gradually gaining in efficiency. The establishment of native plants coupled with regular manual maintenance can reduce the development of Japanese knotweed. Continued monitoring over the next few years will determine the kinetics of total eradication (i.e., 0 stems/plot) of Japanese knotweed by these methods.

Keywords: fallopia japonica, interspecific plant competition, Rhône river, riparian trees

Procedia PDF Downloads 132
16068 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, making them an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but as yet there is neither a reference method nor an estimation of the impact of these different techniques on the structure of the colloids, nor of the bias introduced by the separation method. In the present work, we tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly size distribution and elemental composition. Ultrafiltration methods (frontal, tangential and centrifugal) were considered, since they are widely used for the extraction of colloids from natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon (TOC) analysis) were chosen as comparison criteria, making it possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the composition of the colloidal phase more effectively, while others are easier or faster to use. The choice of extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should improve the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 215
16067 A New Approach for Improving Accuracy of Multi Label Stream Data

Authors: Kunal Shah, Swati Patel

Abstract:

Many real-world problems involve data that can be considered as multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as the learners must be able to adapt to change using limited time and memory. Classification is used to predict the class of an unseen instance as accurately as possible. Multi-label classification is a variant of single-label classification in which a set of labels is associated with a single instance; it is used by modern applications such as text classification, functional genomics, image classification, and music categorization. This paper introduces the task of multi-label classification, methods for multi-label classification, and evaluation measures for multi-label classification. A comparative analysis of multi-label classification methods was also carried out, first on the basis of theoretical study and then by simulation on various data sets.
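
As a minimal illustration of one widely surveyed method, binary relevance, which trains one independent binary classifier per label (the data below is synthetic; in a true streaming setting the base learner would be replaced by an incremental one):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BinaryRelevance:
    """Binary relevance: one independent binary classifier per label."""
    def __init__(self, base=LogisticRegression):
        self.base = base
        self.models = []

    def fit(self, X, Y):                      # Y: (n_samples, n_labels) binary matrix
        self.models = [self.base().fit(X, Y[:, j]) for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.models])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = (X @ rng.normal(size=(5, 3)) > 0).astype(int)   # 3 correlated synthetic labels
clf = BinaryRelevance().fit(X[:150], Y[:150])
acc = (clf.predict(X[150:]) == Y[150:]).mean()      # per-label accuracy on held-out data
```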

Keywords: binary relevance, concept drift, data stream mining, MLSC, multiple window with buffer

Procedia PDF Downloads 584
16066 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells

Authors: Ali Akbar, Seungho Shin, Sukkee Um

Abstract:

The composition of catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and a more durable catalyst still remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum Pt/C and Pt/VACNT catalysts in pursuit of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random stochastic Monte Carlo-based method. This four-phase (i.e., pore, ionomer, carbon, and platinum) statistical approach closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effects of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst are investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental calculations. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs. The simulation results on Pt/VACNT-based CLs show exceptionally high catalyst utilization compared to Pt/C at different composition ratios. More importantly, this study reveals that the maximum catalyst utilization depends on the spacing between the carbon nanotubes for Pt/VACNT. The current simulation results are expected to be utilized in the optimization of the nanostructural construction and composition of Pt/C and Pt/VACNT CLs.
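
A sketch of the stochastic reconstruction idea: fill a voxel grid with the four phases according to target volume fractions, then score Pt voxels that touch both pore and ionomer neighbors as a crude utilization proxy. The fractions and the proxy are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (50, 50, 50)                      # REV-scale voxel domain
phases = ["pore", "ionomer", "carbon", "platinum"]
fractions = [0.45, 0.25, 0.25, 0.05]      # assumed target volume fractions
domain = rng.choice(len(phases), size=shape, p=fractions)   # phase index per voxel

def touches(domain, phase_idx):
    """Voxels with at least one 6-neighbor of the given phase (periodic wrap)."""
    m = domain == phase_idx
    out = np.zeros_like(m)
    for axis in range(3):
        for shift in (1, -1):
            out |= np.roll(m, shift, axis=axis)
    return out

# Crude proxy: Pt is "active" only with both gas (pore) and ion (ionomer) access
pt = domain == phases.index("platinum")
active_pt = (pt & touches(domain, phases.index("pore"))
                & touches(domain, phases.index("ionomer")))
utilization = active_pt.sum() / max(pt.sum(), 1)
print(f"active Pt fraction: {utilization:.2f}")
```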

Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling

Procedia PDF Downloads 121
16065 Valorisation of Mango Seed: Response Surface Methodology Based Optimization of Starch Extraction from Mango Seeds

Authors: Tamrat Tesfaye, Bruce Sithole

Abstract:

Box-Behnken response surface methodology was used to determine the optimum processing conditions giving maximum extraction yield and whiteness index from mango seed. Steeping times ranged from 2 to 12 hours, and the steeped seeds were slurried in sodium metabisulphite solution (0.1 to 0.5% w/v). Experiments were designed according to a Box-Behnken design with these three factors, and a total of 15 experimental runs were analyzed. At the linear level, the concentration of sodium metabisulphite had a significant positive influence on percentage yield and whiteness index at p < 0.05. At the quadratic level, sodium metabisulphite concentration and sodium metabisulphite concentration² had a significant negative influence on starch yield, while sodium metabisulphite concentration and steeping time × temperature had a significant (p < 0.05) positive influence on whiteness index. The adjusted R² above 0.8 for starch yield (0.906465) and whiteness index (0.909268) showed a good fit of the model to the experimental data. The optimum sodium metabisulphite concentration, steeping time, and temperature for starch isolation with maximum starch yield (66.428%) and whiteness index (85%) as set optimization goals, with a desirability of 0.91939, were 0.255% w/v, 2 h, and 50 °C, respectively. The experimentally determined value of each response under the optimal conditions was statistically in accordance with the predicted levels at p < 0.05. Mango seeds are by-products of mango processing and pose a disposal problem if not handled properly. The substitution of food-based sizing agents with mango seed starch can contribute to resource deployment for value-added product manufacturing and waste utilization, which might play a significant role in food security in Ethiopia.
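
A sketch of the desirability-based optimization typically paired with Box-Behnken designs: assume or fit a quadratic model per response, then maximize the combined (geometric-mean) desirability over the coded factor space. The model coefficients below are invented placeholders, not the fitted ones from the paper.

```python
import numpy as np
from itertools import product

def desirability(y, lo, hi):
    """Larger-is-better desirability, linear between lo (d=0) and hi (d=1)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def yield_model(x):      # x = (concentration, time, temperature) in coded units [-1, 1]
    return 60 + 4 * x[0] - 2 * x[0] ** 2 + 1.5 * x[1] + 2 * x[2]        # assumed

def whiteness_model(x):
    return 80 + 3 * x[0] - 1 * x[1] + 2 * x[2] - 1.5 * x[2] ** 2        # assumed

grid = np.linspace(-1, 1, 41)
best = max(product(grid, repeat=3),
           key=lambda x: np.sqrt(desirability(yield_model(x), 50, 70) *
                                 desirability(whiteness_model(x), 70, 90)))
print("coded optimum:", best)   # decode to physical units of % w/v, hours and deg C
```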

Keywords: mango, synthetic sizing agent, starch, extraction, textile, sizing

Procedia PDF Downloads 231
16064 Binary Decision Diagram Based Methods to Evaluate the Reliability of Systems Considering Failure Dependencies

Authors: Siqi Qiu, Yijian Zheng, Xin Guo Ming

Abstract:

In many reliability and risk analyses, component failures are assumed to be independent. In reality, however, ignoring failure dependencies among components may render the results of reliability and risk analyses incorrect. There are two principal ways to incorporate failure dependencies into system reliability and risk analysis: implicit and explicit methods. In the implicit method, failure dependencies can be modeled by joint probabilities, correlation values, or conditional probabilities. In the explicit method, certain types of dependencies can be modeled in a fault tree as mutually independent basic events for specific component failures. In this paper, explicit and implicit methods based on Binary Decision Diagrams (BDDs) are proposed to evaluate the reliability of systems considering failure dependencies. The obtained results prove the equivalence of the proposed implicit and explicit methods. It is found that considering failure dependencies decreases the reliability of systems. This observation is intuitive, because more components fail due to failure dependencies. Accounting for failure dependencies helps designers to reduce the dependencies between components during the design phase and thus make the system more reliable.
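
A toy numerical illustration of why the dependency matters, using the implicit (conditional-probability) view on a two-component parallel system; the numbers are invented:

```python
# Two parallel components: the system fails only if both A and B fail.
pA = 0.1                   # P(A fails)
pB_given_A = 0.5           # dependence: A's failure stresses B
pB_given_notA = 0.05

p_system_dependent = pA * pB_given_A                          # 0.050
pB_marginal = pA * pB_given_A + (1 - pA) * pB_given_notA      # 0.095
p_system_independent = pA * pB_marginal                       # 0.0095
# Assuming independence understates the failure probability by a factor of ~5,
# i.e. the dependent system is less reliable, as the paper observes.
print(p_system_dependent, p_system_independent)
```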

Keywords: reliability assessment, risk assessment, failure dependencies, binary decision diagram

Procedia PDF Downloads 472
16063 Computer-Aided Approach for Strut-and-Tie Models of Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available; plate models of massive areas, however, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative for automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains highly dependent on the judgment of the design engineer. The tool developed here supports engineers in their choice of structure. The method implemented consists of defining a ground structure built from the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of connecting struts and ties, compared with the cases encountered in the literature. The evolution of the tool will then make it possible to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, to satisfy the constructability constraint, a final reinforcement layout with an orthogonal arrangement and regulated spacing will be implemented in the tool.
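
A minimal sketch of the fully-stressed-design update at the heart of such ground-structure optimization: resize each member so its stress approaches the allowable value, keeping a small area floor to avoid singular systems. The truss data are illustrative, and the statically determinate case keeps forces independent of areas.

```python
import numpy as np

def fully_stressed(areas, member_forces, sigma_allow, a_min=1e-6, iters=50):
    """areas: initial member areas; member_forces(areas) -> axial force per member."""
    a = np.array(areas, dtype=float)
    for _ in range(iters):
        N = member_forces(a)                   # axial forces from an elastic analysis
        a_new = np.abs(N) / sigma_allow        # area needed for |sigma| = sigma_allow
        a = np.maximum(a_new, a_min)           # floor keeps the stiffness matrix regular
    return a

# Hypothetical statically determinate case: forces independent of areas
forces = np.array([120e3, -80e3, 15e3])        # N (tension +, compression -)
a_opt = fully_stressed(np.ones(3) * 1e-3, lambda a: forces, sigma_allow=235e6)
# resulting areas: |N_i| / sigma_allow -> [5.1e-4, 3.4e-4, 6.4e-5] m^2
```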

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 141
16062 Online Prediction of Nonlinear Signal Processing Problems Based on Kernel Adaptive Filtering

Authors: Hamza Nejib, Okba Taouali

Abstract:

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares (KLMS) and kernel recursive least squares (KRLS), used to predict new outputs in nonlinear signal processing problems. Both of these methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to the observed data, transformed from the input space to a high-dimensional feature space; this idea is known as the kernel trick. KAF thus develops adaptive filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and, finally, to determine which of them is better adapted.
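
A minimal KLMS sketch (one of the two KAF methods compared): the filter output is a growing kernel expansion over past inputs, with each new coefficient set by the instantaneous error. The embedded test series below is a simple stand-in for Mackey-Glass.

```python
import numpy as np

def klms(X, d, step=0.5, width=1.0):
    """KLMS with a Gaussian kernel. X: (n, m) input vectors, d: (n,) desired outputs.
    Returns the sequence of one-step predictions."""
    kernel = lambda a, B: np.exp(-np.sum((B - a) ** 2, axis=1) / (2 * width ** 2))
    centers, alphas, y = [], [], np.zeros(len(d))
    for i in range(len(d)):
        if centers:
            y[i] = np.dot(alphas, kernel(X[i], np.array(centers)))
        err = d[i] - y[i]
        centers.append(X[i])
        alphas.append(step * err)      # new expansion coefficient from the error
    return y

# Hypothetical one-step prediction of a noisy nonlinear series from 3 past samples
rng = np.random.default_rng(0)
t = np.arange(400)
s = np.sin(0.3 * t) * np.cos(0.11 * t) + 0.01 * rng.normal(size=400)
X = np.column_stack([s[2:-1], s[1:-2], s[:-3]])    # time-delay embedding
d = s[3:]
pred = klms(X, d)
mse_late = np.mean((pred[200:] - d[200:]) ** 2)    # error shrinks as the filter adapts
```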

Keywords: online prediction, KAF, signal processing, RKHS, kernel methods, KRLS, KLMS

Procedia PDF Downloads 399
16061 Implant Guided Surgery and Immediate Loading

Authors: Omid Tavakol, Mahnaz Gholami

Abstract:

Introduction: The main goal of this oral presentation is to discuss immediate loading in dental implants, from treatment planning and surgical guide design to delivery, follow-up, and occlusal considerations. Methods and materials: First of all, systematic reviews on immediate loading will be considered. In addition, a comparison will be made between immediate loading and conventional loading in terms of success rate and complications. After that, the different methods, prosthetic options, and materials best suited to immediate loading will be explained, in particular multi-unit abutments and their mechanism of function. Digital impressions and the design of temporaries are the next topics to be explicated, followed by the differences between single-unit, multiple-unit, and full-arch implantation in immediate loading. Methods for tissue engineering and papilla formation after extraction will then be described. The last slides present a full-mouth rehabilitation via the immediate loading technique, from surgical design to follow-up. Finally, potential complications will be discussed: how to prevent their occurrence and what to do when faced with them.

Keywords: guided surgery, digital implantology, immediate loading, digital dentistry

Procedia PDF Downloads 44