Search results for: computational methods
15921 Evaluation of Iranian Standard for Assessment of Liquefaction Potential of Cohesionless Soils Based on SPT
Authors: Reza Ziaie Moayad, Azam Kouhpeyma
Abstract:
In-situ testing is preferred for evaluating the liquefaction potential of cohesionless soils because of the high disturbance these soils undergo during sampling. Although new in-situ methods with high accuracy have been developed, the standard penetration test (SPT), the simplest and oldest in-situ test, is still used due to the profusion of recorded data. This paper reviews the Iranian standard for evaluating liquefaction potential in soils (code 525) and compares the SPT-based liquefaction assessment methods for cohesionless soils in this standard with international standards. To this end, the methods for assessing liquefaction potential presented by Cetin et al. (2004) and Boulanger and Idriss (2014) are compared with what is presented in standard 525. It is found that although the procedure used in the Iranian standard for evaluating liquefaction potential has not been updated according to new findings, it is a conservative procedure.
Keywords: cohesionless soil, liquefaction, SPT, standard 525
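As a rough illustration of the simplified-procedure arithmetic that SPT-based assessments of this kind build on, the sketch below computes the Seed-Idriss cyclic stress ratio (CSR) and a factor of safety against liquefaction. The function names and sample numbers are illustrative, not taken from the paper or from standard 525.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d):
    """Simplified Seed-Idriss CSR: 0.65 * (a_max/g) * (sigma_v/sigma_v') * r_d.

    a_max_g     -- peak ground acceleration as a fraction of g
    sigma_v     -- total vertical stress at the depth of interest (kPa)
    sigma_v_eff -- effective vertical stress at the same depth (kPa)
    r_d         -- depth-dependent stress reduction coefficient
    """
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; FS > 1 suggests the layer is not expected to liquefy."""
    return crr / csr
```

The cyclic resistance ratio (CRR) itself would come from an SPT blow-count correlation such as the Boulanger-Idriss (2014) curves; here it is treated as a given input.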
Procedia PDF Downloads 173
15920 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions
Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan
Abstract:
Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expressions. In the current study, in each trial participants were presented with sequences of two images containing facial expressions, in different combinations sampled from the eight static basic expressions (64 in total; 8×8). In each trial, participants were required to judge, using a single word, the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each single facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole.
These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embedding in social perception studies can be a powerful, accurate and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec
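The core computation described above — representing a two-expression sequence as a weighted average of word vectors and mapping it back to the nearest emotion word — can be sketched as follows. The 3-dimensional vectors and the three-word vocabulary are toy stand-ins for real pretrained GloVe embeddings, and the weights are purely illustrative.

```python
import numpy as np

# Toy stand-ins for pretrained GloVe word vectors (illustrative, not real data).
vocab = {
    "angry":  np.array([1.0, 0.0, 0.0]),
    "sad":    np.array([0.0, 1.0, 0.0]),
    "bitter": np.array([0.6, 0.8, 0.0]),
}

def sequence_meaning(v1, v2, w1=0.5, w2=0.5):
    """Weighted average of two expression vectors, renormalized to unit length."""
    m = w1 * v1 + w2 * v2
    return m / np.linalg.norm(m)

def nearest_word(vec):
    """Vocabulary word whose embedding has the highest cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vocab[w], vec))
```

In this toy example, a sequence of an "angry" and a "sad" expression averages to a vector closest to "bitter" — a new state distinct from either component.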
Procedia PDF Downloads 176
15919 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
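The three quantities traded off in the abstract — productivity, yield, and titer — are related by simple definitions, sketched below. The function name and the sample figures are illustrative, not from the paper.

```python
def batch_metrics(product_g, substrate_g, volume_l, hours):
    """Standard batch-fermentation performance measures.

    titer        -- final product concentration (g/L)
    productivity -- volumetric productivity (g/L/h)
    yield_gg     -- product formed per substrate consumed (g/g)
    """
    titer = product_g / volume_l
    productivity = titer / hours
    yield_gg = product_g / substrate_g
    return titer, productivity, yield_gg
```

A two-stage strategy that shortens the production phase raises productivity even at the same yield, which is why a dynamic flux schedule can beat a static one.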
Procedia PDF Downloads 217
15918 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation
Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan
Abstract:
Micro-beams are one of the most common components of Nano-Electromechanical Systems (NEMS) and Micro-Electromechanical Systems (MEMS). For this reason, static bending, buckling, and free vibration analysis of micro-beams have been the subject of many studies. In addition, micro-beams restrained with elastic-type foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and boundary conditions are derived according to Hamilton's principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analysis; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the use of C₀-continuous shape functions, so shear locking is avoided in a built-in manner. Also, element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio, and coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature.
Performance characteristics of the presented formulation were evaluated against other numerical methods such as the generalized differential quadrature method (GDQM). It is found that similar convergence characteristics are obtained with less computational burden. Moreover, the formulation also allows direct calculation of the micro-scale-related contributions to the structural response.
Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method
Procedia PDF Downloads 163
15917 The Study of Rapid Entire Body Assessment and Quick Exposure Check Correlation in an Engine Oil Company
Authors: Mohammadreza Ashouri, Majid Motamedzade
Abstract:
Rapid Entire Body Assessment (REBA) and Quick Exposure Check (QEC) are two general methods to assess the risk factors of work-related musculoskeletal disorders (WMSDs). This study aimed to compare the ergonomic risk assessment outputs from QEC and REBA in terms of agreement in the distribution of postural loading scores based on analysis of working postures. This cross-sectional study was conducted in an engine oil company in which 40 jobs were studied. A trained occupational health practitioner observed all jobs. Job information was collected to ensure the completion of the ergonomic risk assessment tools, including QEC and REBA. The results revealed a significant correlation between the final scores (r = 0.731) and the action levels (r = 0.893) of the two applied methods. Comparison between the action levels and final scores of the two methods showed no significant difference among working departments. Most of the studied postures acquired low and moderate risk levels in the QEC assessment (low risk = 20%, moderate risk = 50%, high risk = 30%) and in the REBA assessment (low risk = 15%, moderate risk = 60%, high risk = 25%). There is a significant correlation between the two methods. They have a strong correlation in identifying risky jobs and determining the potential risk for incidence of WMSDs. Therefore, researchers may apply both methods interchangeably for postural risk assessment in appropriate working environments.
Keywords: observational method, QEC, REBA, musculoskeletal disorders
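The agreement statistic reported above is a correlation coefficient between paired scores. A minimal Pearson-r sketch, using illustrative score pairs rather than the study's data, looks like this:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

For ordinal action levels (the r = 0.893 figure), a rank-based coefficient such as Spearman's rho, computed by applying the same formula to the ranks, would be the more conventional choice.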
Procedia PDF Downloads 361
15916 High Unmet Need and Factors Associated with Utilization of Contraceptive Methods among Women from the Digo Community of Kwale, Kenya
Authors: Mochache Vernon, Mwakusema Omar, Lakhani Amyn, El Busaidy Hajara, Temmerman Marleen, Gichangi Peter
Abstract:
Background: Utilization of contraceptive methods has been associated with improved maternal and child health (MCH) outcomes. Unfortunately, there has been sub-optimal uptake of contraceptive services in the developing world despite significant resources being dedicated to it. It is imperative to understand, at a granular level, the factors that could influence uptake and utilization of contraception. Methodology: Between March and December 2015, we conducted a mixed-methods cross-sectional study among women of reproductive age (18-45 years) from a predominantly rural coastal Kenyan community. Qualitative approaches involved focus group discussions as well as a series of key-informant interviews. We also administered a sexual and reproductive health survey questionnaire at the household level. Results: We interviewed 745 women from 15 villages in Kwale County. The median (interquartile range, IQR) age was 29 (23-37), while 76% reported being currently in a marital union. Eighty-seven percent and 85% of respondents reported ever attending school and ever giving birth, respectively. Respondents who had ever attended school were more than twice as likely to be using contraceptive methods [odds ratio, OR = 2.1, 95% confidence interval, CI: 1.4-3.4, P = 0.001], while those who had ever given birth were five times as likely to be using these methods [OR = 5.0, 95% CI: 1.7-15.0, P = 0.004]. The odds were similarly high among women who reported attending antenatal care (ANC) [OR = 4.0, 95% CI: 1.1-14.8, P = 0.04] as well as those who expressly stated that they did not want any more children or wanted to wait longer before having another child [OR = 6.7, 95% CI: 3.3-13.8, P < 0.0001]. Interviewees reported deferring to the 'wisdom' of an older maternal figure in the decision-making process.
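The association measures above are odds ratios with Wald (Woolf) confidence intervals, which can be computed directly from a 2x2 table. The counts below are illustrative only, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table via the Woolf (log) method.

    a, b -- exposed group: contraceptive users / non-users
    c, d -- unexposed group: contraceptive users / non-users
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

An interval that excludes 1 (as all four reported intervals do) corresponds to a statistically significant association at the 5% level.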
Conclusions: Uptake and utilization of contraceptive methods among Digo women from Kwale, Kenya is positively associated with demand-side factors including educational attainment, previous birth experience, ANC attendance and a negative future fertility desire. Interventions to improve contraceptive services should focus on engaging dominant maternal figures in the community.
Keywords: unmet need, utilization of contraceptive methods, women, Digo community
Procedia PDF Downloads 183
15915 Aeroacoustics Investigations of Unsteady 3D Airfoil for Different Angle Using Computational Fluid Dynamics Software
Authors: Haydar Kepekçi, Baha Zafer, Hasan Rıza Güven
Abstract:
Noise disturbance is one of the major factors considered in the fast development of aircraft technology. This paper examines the flow field around the 2D NACA0015 and 3D NACA0012 blade profiles, using the SST k-ω turbulence model to compute the unsteady flow field. The time-dependent flow-field variables are inserted into the Ffowcs Williams and Hawkings (FW-H) equations as input, and Sound Pressure Level (SPL) values are computed for different angles of attack (AoA) at a microphone positioned in the computational domain, to investigate the effect of unsteadiness on the noise level of the 2D and 3D airfoil regions. The computed results are compared with experimental data available in the open literature. One of the calculated Cp values is slightly lower than the experimental value; this difference could be due to the higher Reynolds number of the experimental data. The ANSYS Fluent software was used in this study. Fluent includes well-validated physical modeling capabilities to deliver fast, accurate results across a wide range of CFD and multiphysics applications. This paper presents a study of external flow over an airfoil. The 2D NACA0015 case has approximately 7 million elements and solves compressible fluid flow with heat transfer using the SST turbulence model; the 3D NACA0012 case has approximately 3 million elements.
Keywords: 3D blade profile, noise disturbance, aeroacoustics, Ffowcs Williams and Hawkings (FW-H) equations, SST k-ω turbulence model
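The SPL values mentioned above follow the standard acoustics definition, 20·log10 of the RMS pressure over a reference pressure (20 µPa in air). A minimal sketch, with illustrative pressure values:

```python
import math

P_REF = 20e-6  # standard reference pressure in air, Pa

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa from an RMS acoustic pressure in Pa."""
    return 20 * math.log10(p_rms / P_REF)
```

In an FW-H post-processing step, p_rms would come from the time history of acoustic pressure sampled at the virtual microphone.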
Procedia PDF Downloads 213
15914 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach
Authors: Rajvir Kaur, Jeewani Anupama Ginige
Abstract:
With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from clinician-oriented to technology-oriented practice. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out using the standard metrics precision (P), recall (R), F1-score and accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers were tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.
Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall
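The four evaluation metrics named above are all derived from the confusion-matrix counts. A minimal sketch (the counts here are illustrative, not the paper's results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score and accuracy from confusion-matrix counts.

    tp/fp -- true/false positives; fn/tn -- false/true negatives.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```

For imbalanced medical datasets such as cervical cancer screening data, F1-score is usually more informative than raw accuracy, since a trivial majority-class classifier can score high on accuracy alone.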
Procedia PDF Downloads 278
15913 Direct Displacement-Based Design Procedure for Performance-Based Seismic Design of Structures
Authors: Haleh Hamidpour
Abstract:
Since the seismic damageability of structures is controlled by the inelastic deformation capacities of structural elements, seismic design of structures based on force-analogy methods is not appropriate. In recent years, the basic approach of design codes has changed from force-based to displacement-based. In this regard, a Direct Displacement-Based Design (DDBD) method and a Performance-Based Plastic Design (PBPD) method have been proposed. In this study, the efficiency of these two methods with respect to the seismic performance of structures is evaluated through a sample 12-story reinforced concrete moment frame. The building is designed separately based on the DDBD and the PBPD methods. The structure is also designed by the traditional force-analogy method according to the FEMA P695 regulation. Each design method results in different structural elements. The seismic performance of these three structures is evaluated through nonlinear static and nonlinear dynamic analysis. The results show that the displacement-based design methods accommodate the intended performance objectives better than the traditional force-analogy method.
Keywords: direct performance-based design, ductility demands, inelastic seismic performance, yield mechanism
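The final step of a DDBD procedure converts a target design displacement of the equivalent SDOF system into a design base shear via the effective stiffness at the effective period. A minimal sketch of that substitute-structure arithmetic, with illustrative numbers (not the paper's 12-story frame):

```python
import math

def ddbd_base_shear(m_eff, t_eff, delta_d):
    """Design base shear of the substitute SDOF structure.

    m_eff   -- effective mass (kg)
    t_eff   -- effective period read from the displacement spectrum (s)
    delta_d -- target design displacement (m)
    """
    k_eff = 4 * math.pi**2 * m_eff / t_eff**2  # effective stiffness, N/m
    return k_eff * delta_d                      # base shear, N
```

The effective period itself is obtained by entering the damped displacement spectrum at the design displacement, so the ductility demand is built in before any force is computed.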
Procedia PDF Downloads 334
15912 Corrosion of Concrete Reinforcing Steel Bars Tested and Compared Between Various Protection Methods
Authors: P. van Tonder, U. Bagdadi, B. M. D. Lario, Z. Masina, T. R. Motshwari
Abstract:
This paper analyses how concrete reinforcing steel bars corrode and how corrosion can be minimised through various protection methods, such as metal-based paint, alloying, cathodic protection and electroplating. Samples of carbon steel bars were protected using these four methods. Tests performed on the samples included durability, electrical resistivity and bond strength. Durability results indicated relatively low corrosion rates for alloying, cathodic protection, electroplating and metal-based paint. The resistivity results indicated that all samples experienced a downward trend, despite erratic fluctuations in the data, indicating an inverse relationship between electrical resistivity and corrosion rate. The results indicated lower bond strengths when the reinforced concrete was cured in seawater compared with curing in normal water. They also showed that higher design compressive strengths lead to higher bond strengths, which can be used to compensate for the loss of bond strength due to corrosion in a real-world application. In terms of implications, all protection methods have the potential to resist corrosion effectively in real-world applications, especially the alloying, cathodic protection and electroplating methods. The metal-based paint underperformed by comparison, most likely due to the nature of paint in general, which can fade and chip away, revealing the steel samples and exposing them to corrosion. For alloying, stainless steel is the suggested material of choice, where Y-bars are highly recommended since smooth bars have a much lower bond strength. Cathodic protection performed best of all in protecting the samples from corrosion; however, its real-world application would require significant evaluation of its feasibility.
Keywords: protection methods, corrosion, concrete, reinforcing steel bars
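Durability comparisons of this kind are usually expressed as a corrosion rate derived from coupon mass loss, in the style of ASTM G1. A minimal sketch, with illustrative inputs (the paper does not specify its rate calculation):

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3):
    """Corrosion rate (mm/year) from coupon mass loss, ASTM G1-style.

    K = 8.76e4 converts g/(cm^2 * h * g/cm^3) to mm/year.
    """
    K = 8.76e4
    return (K * mass_loss_g) / (area_cm2 * hours * density_g_cm3)
```

For carbon steel (density about 7.85 g/cm^3), even a 0.1 g loss on a small coupon over a few weeks corresponds to an easily measurable rate, which is why mass-loss coupons remain a standard comparison tool.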
Procedia PDF Downloads 174
15911 Mentor and Mentee Based Learning
Authors: Erhan Eroğlu
Abstract:
This paper presents a new method called Mentor and Mentee Based Learning. This method is becoming more and more common, especially at workplaces, and this study is significant in that it clearly shows how well it works. Education has always aimed at equipping people with the necessary knowledge and information. For many decades it relied on teachers' talk-and-chalk methods. In the second half of the nineteenth century, educators felt the need for changes in delivery systems. New terms such as self-discovery, learner engagement, student-centered learning, and hands-on learning have been popular for a long time. However, some educators believe there is still much room for better learning methods in many fields, as they think learners still cannot fulfill their potential. Thus, new systems and methods are still being developed and applied at education centers and workplaces. One of the latest methods is assigning mentors to newly recruited employees and training them within a mentor-and-mentee program, which allows both parties to see their strengths and weaknesses and the areas that can be improved. This paper aims at finding out the perceptions of mentors and mentees of the programs they are offered at their workplaces and suggests some alternatives for improvement. The study was conducted via a qualitative method whereby interviews were held with mentors and mentees, both separately and together. Results show that it is a great way to train inexperienced employees and also to refresh more experienced ones. Some points to be improved have also been underlined. The paper shows that education is not a one-way path.
Keywords: learning, mentor, mentee, training
Procedia PDF Downloads 228
15910 Review of Research on Effectiveness Evaluation of Technology Innovation Policy
Authors: Xue Wang, Li-Wei Fan
Abstract:
Technology innovation has become the driving force of social and economic development and transformation. The guidance and support of public policies are an important condition for achieving technology innovation goals, and policy effectiveness evaluation is instructive for policy learning and adjustment. This paper reviews existing studies and systematically evaluates the effectiveness of policy-driven technological innovation. We used 167 articles from the WOS and CNKI databases as samples to clarify the measurement of technological innovation indicators and to analyze the classification and application of policy evaluation methods. In general, technology innovation input and technological output are the two main aspects of technological innovation index design. Technological patents are the focus of research: the number of patents reflects the scale of technological innovation, and the quality of patents reflects the value of innovation from multiple aspects. As for policy evaluation methods, statistical analysis methods are applied to the formulation, selection and ex-post evaluation of policies to analyze the effect of policy implementation qualitatively and quantitatively. Bibliometric methods are mainly based on public policy texts, discerning inter-government relationships and the multi-dimensional value of policy. Decision analysis focuses on establishing and measuring a comprehensive evaluation index system for public policy. Economic analysis methods focus on the performance and output of technological innovation to test policy effects. Finally, this paper puts forward prospective future research directions.
Keywords: technology innovation, index, policy effectiveness, evaluation of policy, bibliometric analysis
Procedia PDF Downloads 72
15909 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods
Procedia PDF Downloads 47
15908 Searching the Relationship among Components that Contribute to Interactive Plight and Educational Execution
Authors: Shri Krishna Mishra
Abstract:
In an educational context, technology can prompt interactive plight only when it is used in conjunction with interactive-plight methods. This study therefore examines the relationships among components that contribute to higher levels of interactive plight and execution, such as interactive-plight methods, technology, intrinsic motivation and deep learning. 526 students participated in this study. With structural equation modelling, the authors test the conceptual model and identify satisfactory model fit. The results indicate that interactive-plight methods, technology and intrinsic motivation have significant relationships with interactive plight; deep learning mediates the relationships of the other variables with execution.
Keywords: relationship among components, interactive plight, educational execution, intrinsic motivation
Procedia PDF Downloads 454
15907 Effect of Bi-Dispersity on Particle Clustering in Sedimentation
Authors: Ali Abbas Zaidi
Abstract:
In free settling or sedimentation, particles form clusters at high Reynolds numbers and in dilute suspensions, due to the entrapment of particles in the wakes of upstream particles. In this paper, the effect of the bi-dispersity of settling particles on particle clustering is investigated using particle-resolved direct numerical simulation. The immersed boundary method is used for particle-fluid interactions, and the discrete element method is used for particle-particle interactions. The solid volume fraction used in the simulations is 1%, and the Reynolds number based on the Sauter mean diameter is 350. Both the solid volume fraction and the Reynolds number lie in the clustering regime of sedimentation. In the simulations, the particle diameter ratio (i.e., the diameter of the larger particle to that of the smaller particle, d₁/d₂) is varied over 2:1, 3:1 and 4:1. For each particle diameter ratio, the solid volume fraction ratio of the two particle sizes (φ₁/φ₂) is varied over 1:1, 1:2 and 2:1. For comparison, simulations are also performed for monodisperse particles. To study particle clustering, the radial distribution function and the instantaneous locations of particles in the computational domain are examined. It is observed that the degree of particle clustering decreases with increasing bi-dispersity of the settling particles. The smallest degree of particle clustering, or the greatest dispersion of particles, is observed for particles with d₁/d₂ equal to 4:1 and φ₁/φ₂ equal to 1:2. The simulations showed that the reduction in particle clustering with increasing bi-dispersity is due to the difference in settling velocity of the particles: larger particles settle faster and knock the smaller particles out of the clustered regions of the computational domain.
Keywords: dispersion in bi-disperse settling particles, particle microstructures in bi-disperse suspensions, particle resolved direct numerical simulations, settling of bi-disperse particles
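The clustering diagnostic named above, the radial distribution function g(r), compares the observed density of particle pairs at separation r with that of a uniform suspension; g(r) > 1 at short range indicates clustering. A minimal sketch for a periodic cubic domain (the binning and box parameters are illustrative, not from the paper):

```python
import numpy as np

def radial_distribution(positions, box, dr, r_max):
    """g(r) for particle centres in a periodic cubic box of side `box`.

    positions -- (n, 3) array of particle centres
    dr, r_max -- bin width and maximum separation (r_max < box/2)
    Returns bin centres and g(r) values; g ~ 1 for a uniform suspension.
    """
    n = len(positions)
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n):
        d = positions - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r > 0], bins=edges)[0]
    rho = n / box**3                           # mean number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    return edges[:-1] + dr / 2, counts / (n * rho * shell)
```

Dividing the pair counts by the expected counts in each spherical shell (n * rho * shell volume) normalizes g(r) to unity for an ideal uniform distribution, so peaks above 1 quantify the degree of clustering.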
Procedia PDF Downloads 208
15906 A Study of Quality Assurance and Unit Verification Methods in Safety Critical Environment
Authors: Miklos Taliga
Abstract:
In the present case study, we examined the development and testing methods of systems that contain safety-critical elements in different industrial fields. Consequently, we observed the classical object-oriented development and testing environment, as both the medical technology and automobile industries approach the development of safety-critical elements in that way. Subsequently, we examined model-based development. We introduce the quality parameters that define development and testing. Taking modern agile methodology (Scrum) into consideration, we examined whether, and to what extent, the methodologies we found fit into this environment.
Keywords: safety-critical elements, quality management, unit verification, model-based testing, agile methods, Scrum, metamodel, object-oriented programming, field-specific modelling, sprint, user story, UML standard
Procedia PDF Downloads 585
15905 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition methods to solve a larger problem by breaking it down into a number of smaller sub-problems. Different problem decomposition methods have their own strengths and limitations, depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as Hidden-Neuron Level decomposition (HNL). The HNL method competes with established problem decomposition methods in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets compared to the standalone method, and is competitive with methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
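A neuron-level decomposition of this general kind partitions a feedforward network's weight vector into one subpopulation per hidden neuron. The sketch below shows one plausible index partition for a single-hidden-layer network; the exact grouping used by the paper's HNL method may differ, so treat this as an assumption-laden illustration.

```python
def neuron_level_decompose(n_inputs, n_hidden, n_outputs):
    """Partition weight-vector indices so each subcomponent holds one hidden
    neuron's incoming weights, its bias, and its outgoing weights.

    Assumes the flat weight vector is laid out neuron by neuron
    (an assumed layout, not necessarily the paper's).
    """
    per_neuron = n_inputs + 1 + n_outputs  # in-weights + bias + out-weights
    subcomponents = []
    for h in range(n_hidden):
        start = h * per_neuron
        subcomponents.append(list(range(start, start + per_neuron)))
    return subcomponents
```

In cooperative coevolution, each subcomponent is evolved by its own subpopulation, and a candidate is evaluated by splicing it into the best-known members of the other subcomponents to form a full network.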
Procedia PDF Downloads 337
15904 Mariculture Trials of the Philippine Blue Sponge Xestospongia sp.
Authors: Clairecynth Yu, Geminne Manzano
Abstract:
The mariculture potential of the Philippine blue sponge, Xestospongia sp., was assessed through pilot sponge culture in the open sea in two different biogeographic regions of the Philippines. Thirty explants were randomly allocated to the Puerto Galera, Oriental Mindoro culture setup, and another nine were transported to Lucero, Bolinao, Pangasinan. Two different culture methods for the sponge explants, the lantern method and the wall method, were employed to assess the production of Renieramycin M. Both methods were shown to be effective in growing the sponge explants, and Thin Layer Chromatography (TLC) results showed that Renieramycin M is present in the sponges. The effect of partial harvesting on the growth and survival rates of the blue sponge in the Puerto Galera setup was also determined. Results showed a higher growth rate in the partially harvested explants under both culture methods compared with the unharvested explants.
Keywords: chemical ecology, porifera, sponge, Xestospongia sp.
Procedia PDF Downloads 274
15903 Hydrodynamics Study on Planing Hull with and without Step Using Numerical Solution
Authors: Koe Han Beng, Khoo Boo Cheong
Abstract:
The rising interest in stepped hull design has been driven by the demand for more efficient high-speed boats, and with it the need for accurate prediction methods for stepped planing hulls. Although understanding the flow at high Froude number is key to designing a practical stepped hull, studies of stepped hulls have so far been carried out mainly in the towing tank, which is time-consuming and costly in the initial design phase. This work therefore studies the feasibility of predicting the hydrodynamics of high-speed planing hulls, both with and without a step, using computational fluid dynamics (CFD) with the volume of fluid (VOF) methodology. First, the flow around a prismatic body is analyzed; the force generated and its center of pressure are compared with available experimental and empirical data from the literature. The wake behind the transom on the keel line as well as on the quarter-beam buttock line is then compared with the available data; this is important since the afterbody flow of a stepped hull is subjected to the wake of the forebody. Finally, the calm-water performance of a conventional planing hull and its stepped version is predicted. An overset mesh methodology is employed to solve the dynamic equilibrium of the hull. The resistance, trim, and heave are then compared with experimental data. The resistance is found to be predicted well, and the dynamic equilibrium solved by the numerical method is deemed acceptable. This means that computational fluid dynamics will be very useful in further studies of the complex flow around stepped hulls and has potential for use in the design phase.
Keywords: planing hulls, stepped hulls, wake shape, numerical simulation, hydrodynamics
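Since the abstract identifies high Froude number as the governing flow regime, a minimal sketch of the length-based Froude number computation follows; the speed and reference length are invented for illustration.

```python
import math

# Froude number, the similarity parameter governing planing flow.
# Planing regimes are typically associated with Fr well above 1.
def froude_number(speed_mps, ref_length_m, g=9.81):
    return speed_mps / math.sqrt(g * ref_length_m)

fr = froude_number(15.0, 2.5)  # 15 m/s craft, 2.5 m reference length
print(round(fr, 2))            # well into the planing regime
```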
Procedia PDF Downloads 283
15902 Framework for Integrating Big Data and Thick Data: Understanding Customers Better
Authors: Nikita Valluri, Vatcharaporn Esichaikul
Abstract:
With the popularity of data-driven decision making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented, with a focus on delivering a more robust and efficient approach to data-driven decision-making with respect to not only Big data but also 'Thick data', a form of qualitative data. In support of this, an example from the retail sector illustrates how the framework is put into action to yield insights and leverage business intelligence. An interpretive approach is used to analyze findings from both the quantitative and the qualitative data. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory and ethnomethodology). The final goal of this study is to establish the framework as the basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is an enhancement of the traditional data-driven decision-making approach, which depends mainly on quantitative data.
Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data
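A minimal sketch of the integration step described above: aggregating "big" point-of-sale records and enriching each aggregate with a "thick" qualitative code from interviews. All records, field names, and codes are invented for illustration; a production pipeline would use a real data-mining stack plus coded ethnographic notes.

```python
from collections import defaultdict

# Point-of-sale records stand in for "big" quantitative data;
# the interview codes stand in for "thick" qualitative data.
pos = [
    {"customer": "C1", "category": "organic", "amount": 42.0},
    {"customer": "C1", "category": "snacks", "amount": 8.5},
    {"customer": "C2", "category": "organic", "amount": 15.0},
]
thick = {
    "C1": {"motive": "health-conscious"},
    "C2": {"motive": "price-driven"},
}

# Aggregate spend per (customer, category), then enrich each
# aggregate with the customer's qualitative motive code.
spend = defaultdict(float)
for row in pos:
    spend[(row["customer"], row["category"])] += row["amount"]

enriched = [
    {"customer": c, "category": cat, "spend": amt,
     "motive": thick[c]["motive"]}
    for (c, cat), amt in spend.items()
]
print(len(enriched))
```

The point of the framework is exactly this join: the quantitative aggregate says what customers buy, the qualitative code suggests why.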
Procedia PDF Downloads 163
15901 Comparison of Different Extraction Methods for the Determination of Polyphenols
Authors: Senem Suna
Abstract:
Extraction of bioactive compounds from foods and food products is an important topic and a new trend related to their health-promoting effects. As a result of the increasing interest in natural foods, different methods are used for the recovery of these components, especially polyphenols. However, special attention must be paid to selecting the proper technique or processing technology (supercritical fluid extraction, microwave-assisted extraction, ultrasound-assisted extraction, powdered extract production) for each kind of food, both to obtain maximum benefit and to recover the phenolic compounds. In order to meet consumer demand for healthy food and to manage quality and safety requirements, advanced research and development are needed. In this review, the advantages and disadvantages of different extraction methods, their potential use in the food industry, and the effects of polyphenols are discussed in detail. Consequently, by evaluating the results of several studies, this review aims at the selection of the most suitable method for each specific food.
Keywords: bioactives, extraction, powdered extracts, supercritical fluid extraction
Procedia PDF Downloads 239
15900 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application
Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal
Abstract:
This study proposes four jet fuel surrogates (S1, S2, S3, and S4), each based on a careful selection of seven large hydrocarbon fuel components ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, matching the molecular size distribution of actual jet fuel. Each surrogate is composed of seven components: n-propyl cyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄), and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches, first using a decoupling methodology that describes a C₄-C₁₆ skeletal mechanism for the oxidation of heavy hydrocarbons together with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism comprises 128 species and 355 reactions and can therefore be used in computational fluid dynamics (CFD) simulations. Extensive validation was performed for each individual component, covering ignition delay time, species concentration profiles, and laminar flame speed against fundamental experiments over wide operating conditions, and for the blended mixtures. Among the surrogates, S1 was extensively validated against experimental data from shock tubes, rapid compression machines, jet-stirred reactors, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates (S2, S3, and S4) were validated against shock-tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK, and RP-3, respectively.
Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed and validated in parallel against the same data as the previously developed surrogates, but at high-temperature conditions only. After testing the prediction performance of the mechanisms for the surrogates developed by the decoupling methodology, the results were compared with those of the surrogates developed by the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism
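The shock-tube ignition-delay targets used in the validation above are often summarized with an Arrhenius-type correlation. The sketch below shows the functional form only; every coefficient is a placeholder, not a value fitted to the Jet-A, S-8 & GTL, IPK, or RP-3 data.

```python
import math

# Arrhenius-type shock-tube ignition-delay correlation of the form
#   tau = A * p**n * phi**m * exp(Ea / (R * T))
# against which skeletal mechanisms are typically validated.
# All coefficients are illustrative placeholders.
def ignition_delay_us(T_K, p_atm, phi,
                      A=2.0e-4, n=-0.6, m=-0.3,
                      Ea=32000.0, R=1.987):  # Ea, R in cal-based units
    return A * (p_atm ** n) * (phi ** m) * math.exp(Ea / (R * T_K))

tau_hot = ignition_delay_us(1400.0, 20.0, 1.0)
tau_cold = ignition_delay_us(1100.0, 20.0, 1.0)
print(tau_cold > tau_hot)  # delay lengthens as temperature drops
```

With negative pressure and equivalence-ratio exponents, the form also reproduces the usual trend of shorter delays at higher pressure, which is why it serves as a compact validation target.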
Procedia PDF Downloads 137
15899 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms
Authors: Saurav S. Rath, Birendra K. David
Abstract:
The generic pharmaceutical industry has to operate on strict timelines for product development and scale-up from lab to plant. Hence, detailed product and process understanding and the implementation of appropriate mechanistic modelling and Quality-by-Design (QbD) approaches are imperative across the product life cycle. This work provides example cases of such efforts for topical dosage products. Topical products are typically emulsions, gels, thick suspensions, or even simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining, and scaling up, the right manufacturing process with a given set of ingredients to achieve the right product characteristics is a challenge for the process engineer. For example, the non-Newtonian rheology varies not only with CPPs and CMAs but is also an implicit function of globule size (a CQA). Hence, this calls for various mechanistic models to help predict product behaviour. This paper focuses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (such as shear and energy density). For the special case of high-shear homogenisers (HSHs) used in the manufacture of thick emulsions/gels, this work presents findings on (i) a scale-up algorithm for HSHs using shear strain, a novel scale-up parameter for estimating mixing parameters, (ii) the non-linear relationship between viscosity and the shear imparted into the system, and (iii) the effect of hold time on product rheology. Specific examples of how this approach enabled scale-up across the 1L, 10L, 200L, 500L, and 1000L scales are discussed.
Keywords: computational fluid dynamics, morphology, quality-by-design, rheology
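A hedged sketch of the shear-strain scale-up idea in point (i): if the cumulative shear strain (shear rate x time) is held constant across scales, the required hold time follows from the rotor geometry and speed. The geometry, speeds, and target strain below are invented for illustration and need not match the authors' actual algorithm.

```python
import math

# Scale-up rule sketch: keep cumulative shear strain
#   gamma = shear_rate * time = (tip_speed / gap) * time
# constant from lab to plant scale. All numbers are illustrative.
def mixing_time_s(target_strain, rotor_d_m, rpm, gap_m):
    tip_speed = math.pi * rotor_d_m * rpm / 60.0   # m/s
    shear_rate = tip_speed / gap_m                 # 1/s
    return target_strain / shear_rate

t_lab = mixing_time_s(5.0e6, rotor_d_m=0.05, rpm=6000, gap_m=2e-4)
t_plant = mixing_time_s(5.0e6, rotor_d_m=0.30, rpm=600, gap_m=2e-4)
print(t_plant > t_lab)  # slower plant rotor needs a longer hold time
```

The attraction of such a rule is that it turns a scale-up decision into a single conserved quantity, which is what makes shear strain usable as a scale-up parameter.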
Procedia PDF Downloads 269
15898 Effects of Storage Methods on Proximate Compositions of African Yam Bean (Sphenostylis stenocarpa) Seeds
Authors: Iyabode A. Kehinde, Temitope A. Oyedele, Clement G. Afolabi
Abstract:
One of the limitations of African yam bean (AYB) (Sphenostylis stenocarpa) is its poor storage ability, due to the adverse effect of seed-borne fungi. This study was conducted to examine the effects of storage methods on the nutritive composition of AYB seeds stored in three types of storage materials: jute bags, polypropylene bags, and plastic bowls. Freshly harvested AYB seeds were stored in all the storage materials for 6 months using a 2 × 3 factorial design (2 AYB cultivars and 3 storage methods) in 3 replicates. Proximate analysis of the stored seeds was carried out at 3 and 6 months after storage using standard methods. The temperature and relative humidity of the storeroom were recorded monthly with a Kestrel 4000 pocket weather tracker. Seeds stored in jute bags gave the best values for crude protein (24.87%), ash (5.69%), and fat content (6.64%) but the lowest values for crude fibre (2.55%), carbohydrate (50.86%), and moisture content (12.68%) at the 6th month of storage. The temperature of the storeroom decreased from 32.9ºC to 28.3ºC, while the relative humidity increased from 78% to 86%. A decreased incidence of field fungi, namely Rhizopus oryzae, Aspergillus flavus, Geotrichum candidum, Aspergillus fumigatus, and Mucor miehei, was accompanied by an increase in storage fungi, viz. Aspergillus niger, Mucor hiemalis, Penicillium expansum, and Penicillium atrovenetum, with prolonged storage. The study showed that, of the three storage materials, the jute bag was the most effective at preserving AYB seeds.
Keywords: storage methods, proximate composition, African yam bean, fungi
Procedia PDF Downloads 135
15897 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling
Authors: A. K. Borah, A. K. Singh
Abstract:
In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the volume of fluid (VOF) method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT preconditioner, Krylov subspace, multifluid flows, SIMPLE algorithm
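The keywords name Bi-CGSTAB as the Krylov-subspace workhorse of the linear solves inside such a Navier-Stokes code. Below is a minimal, unpreconditioned plain-Python sketch on a small dense system; a real solver would operate on sparse matrices with an ILUT preconditioner, so this only illustrates the iteration itself.

```python
# Minimal unpreconditioned Bi-CGSTAB iteration (illustrative sketch).
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def axpy(a, u, v): return [a * ui + vi for ui, vi in zip(u, v)]  # a*u + v
def matvec(A, u): return [dot(row, u) for row in A]

def bicgstab(A, b, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual for x0 = 0
    rhat = r[:]              # shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxit):
        rho_new = dot(rhat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = axpy(beta, [pi - omega * vi for pi, vi in zip(p, v)], r)
        v = matvec(A, p)
        alpha = rho_new / dot(rhat, v)
        s = axpy(-alpha, v, r)
        if dot(s, s) ** 0.5 < tol:      # converged at the half step
            x = axpy(alpha, p, x)
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)   # stabilization parameter
        x = axpy(alpha, p, axpy(omega, s, x))
        r = axpy(-omega, t, s)
        rho = rho_new
        if dot(r, r) ** 0.5 < tol:
            break
    return x

A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
x = bicgstab(A, b)
print([round(xi, 6) for xi in x])
```

In the full multifluid code, this iteration would be applied to the pressure and momentum systems arising at each time step of the VOF/Navier-Stokes solve.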
Procedia PDF Downloads 528
15896 Numerical Investigation of Pressure Drop in Core Annular Horizontal Pipe Flow
Authors: John Abish, Bibin John
Abstract:
Liquid-liquid flow in a horizontal pipe is investigated in order to reveal the flow patterns arising from the co-existing flow of oil and water. The main focus of the study is to assess the feasibility of reducing the pumping power requirements of petroleum transportation lines by maintaining an annular flow of water around the thick oil core, an idea that makes oil transportation cheaper and easier. The present study uses computational fluid dynamics techniques to model oil-water flows with liquids of similar density and varying viscosity. The flow is simulated using the commercial package Ansys Fluent, with flow-domain modeling and grid generation accomplished through ICEM CFD. The horizontal pipe is modeled with two different inlets and meshed with an O-grid mesh. The standard k-ε turbulence scheme, along with the volume of fluid (VOF) multiphase modeling method, is used to simulate the oil-water flow. Transient flow simulations carried out for a total period of 30 s showed a significant reduction in pressure drop when employing the core-annular flow concept. The study also reveals the effect of the viscosity ratio, the mass flow rates of the individual fluids, and the ratio of superficial velocities on the pressure drop across the pipe length. Contours of velocity and volume fraction are employed along with pressure predictions to assess the effectiveness of the proposed concept both quantitatively and qualitatively. The outcome of the present study is highly relevant for the petrochemical industries.
Keywords: computational fluid dynamics, core-annular flows, frictional flow resistance, oil transportation, pressure drop
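A back-of-the-envelope laminar estimate of why the core-annular concept pays off: in the ideal lubricated limit the pressure drop approaches that of pumping water alone, so the Hagen-Poiseuille ratio of the two viscosities bounds the potential drag reduction. The fluid properties and pipe geometry below are illustrative assumptions, not the simulated case.

```python
import math

# Laminar Hagen-Poiseuille pressure drop for a single-phase pipe flow.
def dp_poiseuille(mu_Pa_s, Q_m3_s, L_m, R_m):
    return 8.0 * mu_Pa_s * L_m * Q_m3_s / (math.pi * R_m ** 4)

Q, L, R = 1.0e-3, 100.0, 0.05           # flow rate, pipe length, radius
dp_oil = dp_poiseuille(0.5, Q, L, R)    # heavy oil, 0.5 Pa.s
dp_water = dp_poiseuille(1.0e-3, Q, L, R)
print(round(dp_oil / dp_water))         # upper bound on drag reduction
```

Real core-annular flows fall short of this ideal limit because of interfacial waves, fouling, and turbulence, which is exactly what the CFD study above quantifies.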
Procedia PDF Downloads 407
15895 Using Learning Apps in the Classroom
Authors: Janet C. Read
Abstract:
UCLan set up a collaboration with Lingokids to assess the Lingokids learning app's impact on learning outcomes in UK classrooms for children aged 3 to 5 years. Data gathered during the controlled study with 69 children include attitudinal data, engagement, and learning scores. The data show that enjoyment while learning was higher among children using the game-based app than among children using traditional methods. It is worth pointing out that engagement with the learning app was significantly higher than with traditional methods among older children. According to the existing literature, there is a direct correlation between engagement, motivation, and learning. Therefore, this study provides relevant data points to conclude that the Lingokids learning app serves its purpose of encouraging learning through playful and interactive content. That said, we believe that learning outcomes should be assessed with a wider range of methods in further studies. Likewise, it would be beneficial to assess the usability and playability of the app in order to evaluate it from other angles.
Keywords: learning app, learning outcomes, rapid test activity, Smileyometer, early childhood education, innovative pedagogy
Procedia PDF Downloads 72
15894 Feedback from Experiments on Managing Methods against Japanese Knotweed on a River Appendix of the Rhône between 2015 and 2020
Authors: William Brasier, Nicolas Rabin, Celeste Joly
Abstract:
Japanese knotweed (Fallopia japonica) is widespread on the banks of the Rhône, colonizing more and more areas along the river. The Compagnie Nationale du Rhône (CNR), which manages the river, has experimented with several control techniques in recent years. Since 2015, 15 experimental plots have been monitored on the banks of a restored river appendix to measure the effect of three control methods: confinement by felt, repeated mowing, and the planting of competing species and/or species with allelopathic power (Viburnum opulus, Rhamnus frangula, Sambucus ebulus, and Juglans regia). Each year, the number of knotweed stems, the number of elder plants, plant heights, and photographs were recorded. After six years of monitoring, the results showed that the density of knotweed stems decreased by 50 to 90% on all plots. The control methods are sustainable and are gradually gaining in effectiveness. The establishment of native plants coupled with regular manual maintenance can reduce the development of Japanese knotweed. Continued monitoring over the next few years will determine the kinetics of total eradication (i.e., 0 stems/plot) of Japanese knotweed by these methods.
Keywords: Fallopia japonica, interspecific plant competition, Rhône river, riparian trees
Procedia PDF Downloads 132
15893 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore must be precisely identified. The narrative then turns to the technological evolution of defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs of advanced imaging technologies, and the demand for processing speeds that align with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity.
Future research directions are proposed to bridge these gaps: enhancing the computational efficiency of AI algorithms, developing novel materials to improve imaging contrast in defect detection, and seamlessly integrating these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
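As a toy illustration of the automated-vision baseline from which the AI/ML methods above evolved, the sketch below flags pixels whose intensity deviates from a die's reference value beyond a threshold. The "wafer image", reference level, and threshold are synthetic.

```python
# Simplest ancestor of automated optical defect screening:
# flag pixels deviating from a reference intensity. (Synthetic data.)
reference = 100
threshold = 20
wafer = [
    [101,  99, 102,  98],
    [100, 160, 101, 100],   # 160: bright surface anomaly
    [ 97, 100,  40, 103],   #  40: dark region, possible void
    [100, 102,  99, 101],
]
defects = [(r, c) for r, row in enumerate(wafer)
           for c, v in enumerate(row)
           if abs(v - reference) > threshold]
print(defects)
```

Modern ML pipelines replace the fixed threshold with learned decision boundaries, but the task (separating defect pixels from nominal background) is the same.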
Procedia PDF Downloads 53
15892 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, making them an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often present at very low concentrations. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, yet there is neither a reference method nor an estimate of the impact of these techniques on colloid structure, nor of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal-phase extraction/pre-concentration and their impact on colloid properties, particularly size distribution and elemental composition. Ultrafiltration methods (frontal, tangential, and centrifugal) were considered since they are widely used for the extraction of colloids from natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by field-flow fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by inductively coupled plasma mass spectrometry (ICP-MS) and total organic carbon (TOC) analysis) were chosen as comparison criteria. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the composition of the colloidal phase more effectively, while others are easier or faster to use.
The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
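Two simple figures of merit underlie the kind of method comparison described above: the volume concentration factor and the colloid recovery in the retentate. The sketch below computes both for three hypothetical ultrafiltration runs; all volumes and masses are invented for illustration, not the study's measurements.

```python
# Bookkeeping sketch for comparing ultrafiltration pre-concentration
# runs. (All numbers are illustrative, not measured values.)
def concentration_factor(v_feed_mL, v_retentate_mL):
    return v_feed_mL / v_retentate_mL

def recovery_pct(mass_retentate_ug, mass_feed_ug):
    return 100.0 * mass_retentate_ug / mass_feed_ug

runs = {
    "frontal UF":     {"cf": concentration_factor(500.0, 10.0),
                       "rec": recovery_pct(8.2, 10.0)},
    "tangential UF":  {"cf": concentration_factor(500.0, 25.0),
                       "rec": recovery_pct(9.1, 10.0)},
    "centrifugal UF": {"cf": concentration_factor(500.0, 5.0),
                       "rec": recovery_pct(7.5, 10.0)},
}
best = max(runs, key=lambda k: runs[k]["rec"])
print(best)
```

The compromise noted above shows up directly here: the method with the highest concentration factor is not necessarily the one that best preserves (recovers) the colloidal phase.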
Procedia PDF Downloads 217