Search results for: large Eddy simulation
9135 Improving Comfort and Energy Mastery: Application of a Method Based on Morpho-Energetic Indicators
Authors: Khadidja Rahmani, Nahla Bouaziz
Abstract:
The climate change and economic crisis currently underway are, directly or indirectly, at the origin of many issues and problems related to the domains of energy and the environment. Since urban space is the core element and a key lever for addressing these problems, particular attention is given to it in this study, both for the opportunities it provides and for the potential it holds to mitigate a situation that is worrying in the face of the requirements of sustainable development. Indeed, the purpose of this work is to develop a method that allows us to guide designers towards projects offering a certain degree of thermo-aeraulic comfort while requiring minimum energy consumption. In this context, architects, urban planners and energy engineers have to collaborate jointly to establish a method based on indicators for the improvement of urban environmental quality (aeraulic and thermal comfort), correlated with a reduction in the energy demand of the entities that make up this environment, in areas with a sub-humid climate. In order to test the feasibility of the method developed in this work and to validate it, we carried out a series of computer-based simulations. This research allows us to evaluate the economic and ecological impact of using the indicators in the design of urban sets. Using this method, we show that an urban design that is carefully considered from an energy standpoint can contribute significantly to preserving the environment and reducing energy consumption.
Keywords: comfort, energy consumption, energy mastery, morpho-energetic indicators, simulation, sub-humid climate, urban sets
Procedia PDF Downloads 273
9134 Nonlocal Beam Models for Free Vibration Analysis of Double-Walled Carbon Nanotubes with Various End Supports
Authors: Babak Safaei, Ahmad Ghanbari, Arash Rahmani
Abstract:
In the present study, the free vibration characteristics of double-walled carbon nanotubes (DWCNTs) are investigated. The small-scale effects are taken into account using Eringen’s nonlocal elasticity theory. The nonlocal elasticity equations are implemented into the different classical beam theories, namely the Euler-Bernoulli beam theory (EBT), Timoshenko beam theory (TBT), Reddy beam theory (RBT), and Levinson beam theory (LBT), to analyze the free vibrations of DWCNTs, in which each wall of the nanotubes is considered as an individual beam coupled through van der Waals interaction forces. The generalized differential quadrature (GDQ) method is utilized to discretize the governing differential equations of each nonlocal beam model along with four commonly used boundary conditions. Then molecular dynamics (MD) simulation is performed for a series of armchair and zigzag DWCNTs with different aspect ratios and boundary conditions, the results of which are matched with those of the nonlocal beam models to extract the appropriate values of the nonlocal parameter corresponding to each type of chirality, nonlocal beam model and boundary condition. It is found that the present nonlocal beam models, with their proposed correct values of the nonlocal parameter, have good capability to predict the vibrational behavior of DWCNTs, especially for higher aspect ratios.
Keywords: double-walled carbon nanotubes, nonlocal continuum elasticity, free vibrations, molecular dynamics simulation, generalized differential quadrature method
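For reference, a minimal statement of the small-scale constitutive law underlying all four beam models is sketched below. It is written from the standard differential form of Eringen's theory (the abstract does not reproduce the equation), with e₀a denoting the nonlocal parameter extracted from the MD comparison.

```latex
% Eringen's differential nonlocal constitutive relation: general tensor form and the
% one-dimensional beam form used when combined with EBT/TBT/RBT/LBT kinematics.
\left[1-(e_{0}a)^{2}\nabla^{2}\right]\boldsymbol{\sigma}=\mathbf{C}:\boldsymbol{\varepsilon},
\qquad
\sigma_{xx}-(e_{0}a)^{2}\,\frac{\partial^{2}\sigma_{xx}}{\partial x^{2}}=E\,\varepsilon_{xx}
```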
Procedia PDF Downloads 293
9133 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating the flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au (n,γ) ¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils without and with a cadmium cover of 1-mm thickness were irradiated at the maximum flux position in the DT to find the irradiation-specific input parameters, namely the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated using a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous energy cross-section data from ENDF-B/VII.1 as well as S (α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values predicted theoretically by MCNP, within ± 10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rate, etc., and that multi-elemental analysis can be carried out by irradiating samples at the maximum flux position using the measured f and α parameters by k₀-NAA standardization.
Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
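As a point of reference, the conversion from the measured ¹⁹⁸Au activity to thermal flux is typically of the form sketched below. The exact correction factors applied in the study (decay during cooling and counting, self-shielding, epithermal contribution) are not spelled out in the abstract, so this is the standard textbook relation rather than the authors' exact expression.

```latex
% Thermal flux from the activity A of ^{198}Au (decay-corrected to the end of irradiation),
% with N the number of ^{197}Au target atoms, sigma_0 the thermal (n,gamma) cross section,
% lambda the ^{198}Au decay constant and t_irr the irradiation time.
\phi_{th}\;=\;\frac{A}{N\,\sigma_{0}\left(1-e^{-\lambda t_{irr}}\right)}
```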
Procedia PDF Downloads 160
9132 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis
Authors: Kevin Potoczny, Katsuichiro Goda
Abstract:
The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as those at the St. Jude site. Additionally, the data did not allow one to analyze failure planes accurately. This study majorly improves the analysis performed by Farzam et al. in two aspects. First, regional assessment with high resolution data allows for the identification of local slopes that may previously have been classified as low susceptibility. This then provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant: 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous. However, high-resolution regional analysis allows for more precise determination of hazard sites.
Keywords: Hazus, high-resolution DEM, Leda clay, regional analysis, susceptibility
Procedia PDF Downloads 73
9131 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm based approach that explores the association rules space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is a large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
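To make the pipeline concrete, the sketch below shows, under simplifying assumptions, how term-term association rules can be derived from a boolean term-document matrix and how the "keep this expansion term?" decision can be framed as supervised classification. The data, thresholds and features are illustrative placeholders, not the SDA 95 setup or the authors' GA-built training set.

```python
# Minimal sketch (not the authors' implementation): derive simple term-term association
# rules from a boolean term-document matrix, then frame "keep this expansion term?" as a
# supervised classification problem. All data below are toy values.
import pandas as pd
from sklearn.svm import SVC

terms = ["jaguar", "car", "engine", "cat", "wild"]
data = [[1, 1, 1, 0, 0],
        [1, 0, 0, 1, 1],
        [1, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [1, 0, 0, 1, 1]]
docs = pd.DataFrame(data, columns=terms).astype(bool)

query_term, n_docs = "jaguar", len(docs)
rules = []
for t in terms:
    if t == query_term:
        continue
    both = (docs[query_term] & docs[t]).sum()
    support = both / n_docs                        # P(query and term)
    confidence = both / docs[query_term].sum()     # P(term | query)
    if support >= 0.2 and confidence >= 0.5:
        rules.append((query_term, t, support, confidence))
print("candidate expansion rules:", rules)

# The paper builds a training set with a genetic algorithm and learns which candidates
# improve MAP; here a placeholder SVM is trained on made-up (support, confidence) features.
X_train = [[0.4, 0.9], [0.1, 0.3], [0.5, 0.7], [0.2, 0.2]]
y_train = [1, 0, 1, 0]            # 1 = adding the term improved retrieval quality
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[0.4, 0.66]]))
```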
Procedia PDF Downloads 322
9130 Research on University Campus Green Renovation Design Method
Authors: Abduxukur Zayit, Guo Rui Chen
Abstract:
Universities play an important role in developing and disseminating ideas of sustainable development. This research is based on the current situation of large and widely distributed university campuses in China. In view of the deterioration of campus performance, the aging of functions and facilities, and the large consumption of energy and resources, the work follows a logic of "problem orientation - goal orientation - dual guidance". Taking the problem orientation as the focus, this paper analyzes the main factors influencing the existing characteristics of university campuses, establishes digital assessment methods and clarifies the key points of the renovation. Based on the goal orientation, this paper puts forward design principles for existing university campuses, builds a green transformation-carding model and sets up a post-use evaluation model. In the end, with dual guidance as the constraint, we formulate green design standards for campus greening, construct greening enhancement measures for the campus environment, and develop and promote a green campus post-use assessment platform. This provides useful research methods and research ideas for the renovation of existing campuses in China, especially urban universities.
Keywords: design method, existing university campus, green renovation, sustainable development
Procedia PDF Downloads 126
9129 A Review of Antimicrobial Strategy for Cotton Textile
Abstract:
Cotton textile has a large specific surface area with good adhesion and water-storage properties, which provide conditions for the growth and settlement of biological organisms. In addition, the soil, dust and solutes from sweat can also be sources of nutrients for microorganisms [236]. Generally speaking, algae can grow on textiles under very moist conditions, providing nutrients for fungal and bacterial growth. Fungi cause multiple problems to textiles, including discolouration, coloured stains and fibre damage. Bacteria can damage fibre and cause unpleasant odours with a slick and slimy feel. In addition, microbes can disrupt manufacturing processes such as textile dyeing, printing and finishing operations through the reduction of viscosity, fermentation and mold formation. Therefore, a large demand exists for antimicrobially finished textiles capable of avoiding or limiting microbial fibre degradation or biofouling, bacterial incidence, odour generation and the spreading or transfer of pathogens. In this review, the main antimicrobial strategies for cotton textiles will be reviewed. In the beginning, the classification of bacteria and germs commonly found on cotton textiles will be introduced. The chemistry of antimicrobial finishing will be discussed. In addition, the types of antimicrobial treatment will be summarized. Finally, the application and evaluation of antimicrobial treatments on cotton textile will be discussed.
Keywords: antimicrobial, cotton, textile, review
Procedia PDF Downloads 363
9128 Effects of Using Alternative Energy Sources and Technologies to Reduce Energy Consumption and Expenditure of a Single Detached House
Authors: Gul Nihal Gugul, Merih Aydinalp-Koksal
Abstract:
In this study, an hourly energy consumption model of a single detached house in Ankara, Turkey is developed using the ESP-r building energy simulation software. Natural gas is used for space heating, cooking, and domestic water heating in this two-story, 4500-square-foot, four-bedroom home. Hourly electricity consumption of the home is monitored by an automated meter reading system, and daily natural gas consumption was recorded by the owners during 2013. Climate data of the region and building envelope data are used to develop the model. The heating energy consumption of the house estimated by the ESP-r model is then compared with the actual heating demand to determine the performance of the model. Scenarios are applied to the model to determine the amount of reduction in the total energy consumption of the house. The scenarios include using photovoltaic panels to generate electricity, ground source heat pumps for space heating, and solar panels for domestic hot water generation. Alternative scenarios such as improving wall and roof insulation and window glazing are also applied. These scenarios are evaluated based on annual energy, associated CO2 emission, and fuel expenditure savings. The pay-back periods for each scenario are also calculated to determine the best alternative energy source or technology option for this home to reduce annual energy use and CO2 emissions.
Keywords: ESP-r, building energy simulation, residential energy saving, CO2 reduction
Procedia PDF Downloads 196
9127 Enhanced Method of Conceptual Sizing of Aircraft Electro-Thermal De-Icing System
Authors: Ahmed Shinkafi, Craig Lawson
Abstract:
There is great advancement towards All-Electric Aircraft (AEA) technology. The AEA concept assumes that all aircraft systems will be integrated into one electrical power source in the future. The principle of the electro-thermal system is to transfer the energy required for anti/de-icing to the protected areas in electrical form. However, powering a large aircraft anti-icing system electrically could be quite excessive in cost and system weight. Hence, maximising the anti/de-icing efficiency of the electro-thermal system in order to minimise its power demand has become crucial to electro-thermal de-icing system sizing. In this work, an enhanced methodology has been developed for conceptual sizing of aircraft electro-thermal de-icing systems. The work factors in terms that were overlooked in previous studies but are critical to de-icing energy consumption. A case study of a typical large aircraft wing de-icing was used to test and validate the model. The model was used to optimise the system performance by a trade-off between the de-icing peak power and the system energy consumption. The optimum melting surface temperatures and energy flux predicted enabled a reduction in the power required for de-icing. The weight penalty associated with the electro-thermal anti-icing/de-icing method could be eliminated using this method without underestimating the de-icing power requirement.
Keywords: aircraft, de-icing system, electro-thermal, in-flight icing
Procedia PDF Downloads 516
9126 Up-Scaling of Highly Transparent Quasi-Solid State Dye-Sensitized Solar Devices Composed of Nanocomposite Materials
Authors: Dimitra Sygkridou, Andreas Rapsomanikis, Elias Stathatos, Polycarpos Falaras, Evangelos Vitoratos
Abstract:
In the present work, highly transparent strip-type quasi-solid state dye-sensitized solar cells (DSSCs) were fabricated through inkjet printing using nanocomposite TiO2 inks as raw materials and tested under outdoor illumination conditions. The cells, which can be considered as the structural units of large area modules, were fully characterized electrically and electrochemically, and after evaluation of the results a large area DSSC module was manufactured. The module design was a sandwich Z-interconnection where the working electrode is deposited on one conductive glass and the counter electrode on a second glass. Silver current-collecting fingers were printed on the conductive glasses to make the internal electrical connections, and the adjacent cells were connected in series and finally insulated using a UV-curing resin to protect them from the corrosive (I-/I3-) redox couple of the electrolyte. Finally, outdoor tests were carried out on the fabricated dye-sensitized solar module and its performance data were collected and assessed.
Keywords: dye-sensitized solar devices, inkjet printing, quasi-solid state electrolyte, transparency, up-scaling
Procedia PDF Downloads 336
9125 Sunspot Cycles: Illuminating Humanity's Mysteries
Authors: Aghamusa Azizov
Abstract:
This study investigates the correlation between solar activity and sentiment in news media coverage, using a large-scale dataset of solar activity since 1750 and over 15 million articles from "The New York Times" dating from 1851 onwards. Employing Pearson's correlation coefficient and multiple Natural Language Processing (NLP) tools (TextBlob, Vader, and DistilBERT), the research examines the extent to which fluctuations in solar phenomena are reflected in the sentiment of historical news narratives. The findings reveal that the correlation between solar activity and media sentiment is generally negligible, suggesting a weak influence of solar patterns on the portrayal of events in news media. Notably, a moderate positive correlation was observed between the sentiments derived from TextBlob and Vader, indicating consistency across NLP tools. The analysis provides insights into the historical impact of solar activity on human affairs and highlights the importance of using multiple analytical methods to understand complex relationships in large datasets. The study contributes to the broader understanding of how extraterrestrial factors may intersect with media-reported events and underlines the intricate nature of interdisciplinary research in the data science and historical domains.
Keywords: solar activity correlation, media sentiment analysis, natural language processing, historical event patterns
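As an illustration of the correlation step, the sketch below scores a handful of toy article texts with TextBlob and Vader and correlates the yearly averages with illustrative solar-activity values using Pearson's r. The real datasets (sunspot records since 1750 and the NYT corpus) and the DistilBERT scoring are not reproduced here.

```python
# Minimal sketch of the correlation step, with toy stand-ins for the real datasets.
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import pearsonr

articles_by_year = {
    1950: ["Markets rallied strongly today.", "Storm damages harbor."],
    1951: ["Peace talks stall again.", "New vaccine brings hope."],
    1952: ["Economy slows amid uncertainty."],
}
sunspots_by_year = {1950: 83.5, 1951: 69.4, 1952: 31.4}   # illustrative values only

vader = SentimentIntensityAnalyzer()
years = sorted(articles_by_year)

def yearly_sentiment(scorer):
    """Average article sentiment per year with the given scoring function."""
    return [sum(scorer(a) for a in articles_by_year[y]) / len(articles_by_year[y])
            for y in years]

tb_scores = yearly_sentiment(lambda t: TextBlob(t).sentiment.polarity)
vd_scores = yearly_sentiment(lambda t: vader.polarity_scores(t)["compound"])
activity = [sunspots_by_year[y] for y in years]

for name, scores in [("TextBlob", tb_scores), ("Vader", vd_scores)]:
    r, p = pearsonr(activity, scores)
    print(f"{name}: Pearson r = {r:.3f} (p = {p:.3f})")
r_tools, _ = pearsonr(tb_scores, vd_scores)   # cross-tool consistency check
print(f"TextBlob vs Vader: r = {r_tools:.3f}")
```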
Procedia PDF Downloads 77
9124 Controlled Growth of Charge Transfer Complex Nanowire by Physical Vapor Deposition Method Using Dielectrophoretic Force
Authors: Rabaya Basori, Arup K. Raychaudhuri
Abstract:
In recent years, a variety of semiconductor nanowires (NWs) has been synthesized and used as basic building blocks for the development of electronic and optoelectronic nanodevices. Dielectrophoresis (DEP) has been widely investigated as a scalable technique to trap and manipulate polarizable objects, including biological cells, nanoparticles, DNA molecules, organic or inorganic NWs and proteins, using electric field gradients. In this article, we have used the DEP force to localize nanowire growth by the physical vapor deposition (PVD) method as well as to control the NW diameter in the field-assisted growth of NWs of CuTCNQ (Cu-tetracyanoquinodimethane), a metal-organic charge transfer complex material well known for resistive switching. We report a versatile analysis platform, based on a set of nanogap electrodes, for the controlled growth of nanowires. A non-uniform electric field and a dielectrophoretic force are created between two metal electrodes patterned by an electron beam lithography process. Suspended CuTCNQ nanowires have been grown laterally between two electrodes under the influence of the electric field and dielectrophoretic force by applying an external bias. The dependence of nanowire growth and diameter on the external bias has been investigated in the framework of these two forces by COMSOL Multiphysics simulation. This work will help successful in-situ nanodevice fabrication with a controlled number of NWs and controlled diameter, without any post-treatment.
Keywords: nanowire, dielectrophoretic force, confined growth, controlled diameter, COMSOL Multiphysics simulation
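For orientation, the time-averaged DEP force is most often quoted for a spherical particle in the form below. This is the standard textbook expression rather than the specific formulation used in the authors' COMSOL model; for elongated nanowires the geometric prefactor and the Clausius-Mossotti factor take modified forms.

```latex
% Time-averaged DEP force on a spherical particle of radius r in a medium of permittivity
% eps_m, with complex permittivities eps* = eps - j*sigma/omega and E_rms the rms field.
\mathbf{F}_{DEP}=2\pi\varepsilon_{m}r^{3}\,
\mathrm{Re}\!\left[K(\omega)\right]\nabla\left|\mathbf{E}_{rms}\right|^{2},
\qquad
K(\omega)=\frac{\varepsilon_{p}^{*}-\varepsilon_{m}^{*}}
{\varepsilon_{p}^{*}+2\varepsilon_{m}^{*}}
```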
Procedia PDF Downloads 189
9123 Produce Large Surface Area Activated Carbon from Biomass for Water Treatment
Authors: Rashad Al-Gaashani
Abstract:
The physicochemical activation method was used to produce high-quality activated carbon (AC) with a large surface area of about 2000 m2/g from low-cost and abundant biomass wastes in Qatar, namely date seeds. X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), and Brunauer-Emmett-Teller (BET) surface area analysis were used to evaluate the AC samples. AC produced from date seeds has a wide range of pores available, including micro- and nano-pores. This type of AC with a well-developed pore structure may be very attractive for different applications, including air and water purification from micro and nano pollutants. Heavy metal ions, iron (III) and copper (II), were removed from wastewater using the produced AC in a batch adsorption technique. The AC produced from date seed biomass wastes shows high removal of heavy metals such as iron (III) ions (100%) and copper (II) ions (97.25%). The highest removal of copper (II) ions (100%) with AC produced from date seeds was found at pH 8, whereas the lowest removal (22.63%) occurred at pH 2. The effect of adsorption time, adsorbent dose, and pH on the removal of heavy metals was studied.
Keywords: activated carbon, date seeds, biomass, heavy metals removal, water treatment
Procedia PDF Downloads 74
9122 Programming without Code: An Approach and Environment to Conditions-On-Data Programming
Authors: Philippe Larvet
Abstract:
This paper presents the concept of an object-based programming language where tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. According to the object paradigm, by using this concept, data are still embedded inside objects, as variable-value couples, but object methods are expressed in the form of logical propositions (‘conditions on data’ or COD). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. Implementing this approach, a central inference engine runs and examines objects one after another, collecting all CODs of each object. CODs are considered as rules in a rule-based system: the left part of each proposition (left side of the ‘=>‘ sign) is the premise and the right part is the conclusion. So, premises are evaluated and conclusions are fired. Conclusions modify the variable-value couples of the object, and the engine goes on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, the paper also presents several hints for implementing a simple mechanism able to process this ‘COD language’. The proposed approach can be used within the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical and long-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation
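A minimal sketch of such an engine is given below, assuming a Python host in which premises and conclusions are expressed as callables over the variable-value couples. The rule syntax and the quiescence-based stopping criterion are illustrative choices, not the paper's actual mechanism.

```python
# Minimal sketch of a COD ("conditions on data") inference engine: each object's methods
# are logical propositions whose premises are evaluated against the variable-value couples
# and whose conclusions update them. The rule representation below is an assumption.
class CodObject:
    def __init__(self, name, data, rules):
        self.name = name
        self.data = dict(data)        # variable-value couples
        self.rules = rules            # list of (premise, conclusion) callables

    def step(self):
        changed = False
        for premise, conclusion in self.rules:
            if premise(self.data):                  # evaluate the left-hand side
                changed |= conclusion(self.data)    # fire the right-hand side
        return changed

def engine(objects, max_cycles=100):
    """Central inference engine: visit objects one after another until nothing changes."""
    for _ in range(max_cycles):
        if not any(obj.step() for obj in objects):
            break

def set_var(data, var, value):
    if data.get(var) != value:
        data[var] = value
        return True
    return False

# Example COD: temperature > 80 AND mode = "auto"  =>  fan = "on"
tank = CodObject(
    "tank",
    {"temperature": 85, "mode": "auto", "fan": "off"},
    [(lambda d: d["temperature"] > 80 and d["mode"] == "auto",
      lambda d: set_var(d, "fan", "on"))],
)
engine([tank])
print(tank.data)   # {'temperature': 85, 'mode': 'auto', 'fan': 'on'}
```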
Procedia PDF Downloads 220
9121 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies
Authors: Elżbieta Turska
Abstract:
Introduction: One of the most difficult times in a person's life is adolescence. The experiences in this period may shape the future life of this person to a large extent. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, as well as losing interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods, and for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students, aged 13–16 years old, and one of its parts was the Burns checklist, i.e. the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student's life; it included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of classical statistical methods. For this reason, we attempted to use the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests and xgboost. All analyses were carried out with the use of the mlr package for the R programming language. Results: The predictive model built by the classification trees algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from the most to the least influential as far as protection against mood disorders is concerned. Thirteen out of the twenty most important variables reflect the relationships with parents. This seems to be a really significant result both from the cognitive point of view and also from the practical point of view, i.e. as far as interventions to correct mood disorders are concerned.
Keywords: mood disorders, adolescents, family, artificial intelligence
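The sketch below illustrates the modelling step in Python with scikit-learn as a stand-in for the R mlr workflow used in the study (where classification trees handle missing answers via surrogate variables; here missingness is handled by imputation instead). The variables, data and labelling rule are purely illustrative.

```python
# Minimal sketch of the modelling step: fit a classification tree on questionnaire-style
# data with missing answers, cross-validate it, and rank variables by importance.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "talks_with_parents": rng.integers(0, 5, n).astype(float),
    "feels_supported_at_home": rng.integers(0, 5, n).astype(float),
    "school_stress": rng.integers(0, 5, n).astype(float),
})
X.iloc[rng.choice(n, 40, replace=False), 0] = np.nan       # mimic missing answers
y = (X["feels_supported_at_home"] < 2).astype(int)         # 1 = depressed mood (toy rule)

model = make_pipeline(SimpleImputer(strategy="most_frequent"),
                      DecisionTreeClassifier(max_depth=4, random_state=0))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
importances = model.named_steps["decisiontreeclassifier"].feature_importances_
for name, imp in sorted(zip(X.columns, importances), key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")    # rank variables by influence, as in the study
```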
Procedia PDF Downloads 100
9120 Time Lag Analysis for Readiness Potential by a Firing Pattern Controller Model of a Motor Nerve System Considered Innervation and Jitter
Authors: Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh
Abstract:
Humans unconsciously make a preparation, called readiness potential (RP), before becoming aware of their own decision. For example, when recognizing and pressing a button, RP peaks are observed 200 ms before the initiation of the movement. It has been known that preparatory movements are acquired before actual movements, but it is still not well understood how humans obtain the RP during their growth. On the question of why the brain must respond earlier, we assume that humans have to adapt to dangerous environments to survive and therefore acquire behavior that covers the various time lags distributed in the body. Without RP, humans cannot take action quickly enough to avoid dangerous situations. In taking action, the brain makes decisions, signals are transmitted through the spinal cord to the muscles, and the body moves according to the laws of physics. Our research focuses on the time lag of the neural signal transmitted from the brain to the muscle via the spinal cord. This time lag is one of the essential factors for readiness potential. We propose a firing pattern controller model of a motor nerve system considering innervation and jitter, which produce time lag. In our simulation, we adopt innervation and jitter in our proposed muscle-skeleton model, because these two factors can create infinitesimal time lags. The Q10 Hodgkin-Huxley model is adopted to calculate action potentials, because the refractory period produces a more significant time lag for continuous firing. Keeping the muscle power constant requires cooperative firing of motor neurons, because the refractory period stifles the continuous firing of a single neuron. One more factor producing time lag is slow- or fast-twitch muscle fiber. The expanded Hill-type model is adopted to calculate power and time lag. We simulate our muscle-skeleton model by controlling the firing pattern and discuss the relationship between the physical time lags and the neural ones. For our discussion, we analyze the time lag with our simulation for knee bending. The law of inertia caused the most influential time lag. The next most crucial time lag was the time to generate the action potential, induced by innervation and jitter. In our simulation, the time lag at the beginning of the knee movement is 202 ms to 203.5 ms. It means that readiness potential should be prepared more than 200 ms before decision making.
Keywords: firing patterns, innervation, jitter, motor nerve system, readiness potential
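To illustrate the membrane-level ingredient of the model, the sketch below integrates a standard Hodgkin-Huxley neuron with a Q10 temperature factor on the gating kinetics and reports spike times, whose spacing reflects the refractory period mentioned above. The parameters are textbook values; the innervation, jitter and Hill-type muscle components of the authors' model are not reproduced.

```python
# Minimal Hodgkin-Huxley sketch with Q10 scaling of the gating rate constants,
# integrated with forward Euler; spike spacing reflects the refractory period.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387              # mV
T_celsius, Q10 = 20.0, 3.0
phi = Q10 ** ((T_celsius - 6.3) / 10.0)          # temperature scaling of rate constants

def rates(V):
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10)); bn = 0.125 * np.exp(-(V + 65) / 80)
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10));  bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20);                  bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, t_end = 0.01, 50.0                           # ms
V, n, m, h = -65.0, 0.318, 0.053, 0.596
spike_times, above = [], False
for step in range(int(t_end / dt)):
    t = step * dt
    I_ext = 10.0 if t > 5.0 else 0.0             # uA/cm^2 stimulus
    an, bn, am, bm, ah, bh = rates(V)
    n += dt * phi * (an * (1 - n) - bn * n)
    m += dt * phi * (am * (1 - m) - bm * m)
    h += dt * phi * (ah * (1 - h) - bh * h)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_ext - I_ion) / C
    if V > 0 and not above:
        spike_times.append(t); above = True
    elif V < -40:
        above = False

print("spike times (ms):", [round(s, 2) for s in spike_times])
```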
Procedia PDF Downloads 827
9119 Biotransformation Process for the Enhanced Production of the Pharmaceutical Agents Sakuranetin and Genkwanin: Poised to be Potent Therapeutic Drugs
Authors: Niranjan Koirala, Sumangala Darsandhari, Hye Jin Jung, Jae Kyung Sohng
Abstract:
Sakuranetin, an antifungal agent, and genkwanin, an anti-inflammatory agent, are flavonoids with several potential pharmaceutical applications. To produce such valuable flavonoids in large quantities, an Escherichia coli cell factory has been created. E. coli harboring an O-methyltransferase (SaOMT2) derived from Streptomyces avermitilis was employed for regiospecific methylation of naringenin and apigenin. In order to increase the production via biotransformation, the metK gene was overexpressed and the conditions were optimized. The maximum yields of sakuranetin and genkwanin under optimized conditions were 197 µM and 170 µM, respectively, when 200 µM of naringenin and apigenin were supplemented in the separate cultures. Furthermore, sakuranetin was purified at large scale and used as a substrate for in vitro glycosylation by YjiC to produce glucose and galactose derivatives of sakuranetin with improved solubility. We also found that, unlike naringenin, sakuranetin effectively inhibits α-melanocyte stimulating hormone (α-MSH)-stimulated melanogenesis in B16F10 melanoma cells. In addition, genkwanin inhibited angiogenesis more potently than apigenin. Based on our findings, we speculate that these compounds warrant further investigation in vivo as potential new therapeutic anti-carcinogenic, anti-melanogenic and anti-angiogenic agents.
Keywords: anti-carcinogenic, anti-melanogenic, glycosylation, methylation
Procedia PDF Downloads 607
9118 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurement
Authors: Eseldin Keleb
Abstract:
In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. The powder flowability, reactivity and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate and 5% polyvinylpyrrolidone were prepared by wet granulation and were used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentage of small and large particles), type of disturbance (vibration and shaking) and process reproducibility have been investigated. The results obtained showed that a sieve load of 50 g produces the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Meanwhile, between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles, which were processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken for samples with particle size distribution skewness.
Keywords: sieving, reliability, particle size distribution, processing parameters
Procedia PDF Downloads 611
9117 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack
Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo
Abstract:
The purpose of this article is to optimize Equivalent Electrical Circuit Models (EECMs) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization includes considering circuits based on 1RC, 2RC and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly non-linear behavior of the battery pack, a Genetic Algorithm (GA) was used to solve for and optimize the parameters of each EECM considered (1RC, 2RC and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured on the real battery pack and the impedance generated by the simulation of each proposed circuit model. The results have been verified by comparing the Nyquist graphs of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable to represent the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that using GA optimization allows obtaining EECMs with 2RC or 3RC networks with high precision to represent the dynamic behavior of a battery pack in vehicular applications.
Keywords: Li-ion battery packs modeling optimized, EECM, GA, electric vehicle applications
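The sketch below illustrates the fitting step on synthetic data: a 2RC impedance model is matched to "measured" spectra by minimizing the mean square error over frequency with scipy's differential_evolution, used here as a readily available evolutionary stand-in for the genetic algorithm described in the paper. The parameter values and bounds are illustrative.

```python
# Minimal sketch: fit a 2RC equivalent-circuit impedance to synthetic measurements by
# minimizing the mean square error over the complex (Nyquist) plane.
import numpy as np
from scipy.optimize import differential_evolution

def z_model(params, w):
    """Z(w) = Rs + R1/(1+j*w*R1*C1) + R2/(1+j*w*R2*C2)."""
    Rs, R1, C1, R2, C2 = params
    return Rs + R1 / (1 + 1j * w * R1 * C1) + R2 / (1 + 1j * w * R2 * C2)

w = 2 * np.pi * np.logspace(-2, 3, 60)                       # rad/s
true = np.array([0.010, 0.004, 50.0, 0.006, 800.0])          # illustrative pack values
z_meas = z_model(true, w) + 1e-5 * np.random.default_rng(1).standard_normal(w.size)

def mse(params):
    return np.mean(np.abs(z_model(params, w) - z_meas) ** 2)

bounds = [(1e-3, 0.1), (1e-4, 0.05), (1.0, 500.0), (1e-4, 0.05), (10.0, 5000.0)]
res = differential_evolution(mse, bounds, seed=0, maxiter=500)
print("fitted [Rs, R1, C1, R2, C2]:", np.round(res.x, 5))
```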
Procedia PDF Downloads 122
9116 Microfluidic Manipulation for Biomedical and Biohealth Applications
Authors: Reza Hadjiaghaie Vafaie, Sevda Givtaj
Abstract:
Automation and control of biological samples and solutions at the microscale is a major advantage for biochemical analysis and biological diagnostics. Despite the known potential of miniaturization in biochemistry and biomedical applications, comparatively little is known about fluid automation and control at the microscale. Here, we study the electric field effect inside a fluidic channel, and proper electrode structures with different patterns are proposed to form forward, reverse, and rotational flows inside the channel. The simulation results confirmed that the AC electro-thermal flow is efficient for the control and automation of highly conductive solutions. In this research, the fluid pumping and mixing effects were numerically studied by solving the coupled electric, temperature, hydrodynamic, and concentration fields inside a microchannel. From an experimental point of view, the electrode structures are deposited on a silicon substrate and bonded to a PDMS microchannel to form a microfluidic chip. The motions of fluorescent particles in pumping and mixing modes were captured using a CCD camera. By measuring the frequency response of the fluid and exciting the electrodes with the proper voltage, the fluid motions (including pumping and mixing effects) are observed inside the channel through the CCD camera. Based on the results, there is good agreement between the experimental and simulation studies.
Keywords: microfluidic, nano/micro actuator, AC electrothermal, Reynolds number, micropump, micromixer, microfabrication, mass transfer, biomedical applications
Procedia PDF Downloads 58
9115 Experimental and Finite Element Analysis of Large Deformation Characteristics of Magnetic Responsive Hydrogel Nanocomposites Membranes
Authors: Mallikarjunachari Gangapuram
Abstract:
Stimuli-responsive hydrogel nanocomposite membranes are gaining significant attention these days due to their potential applications in various engineering fields. For example, sensors, soft actuators, drug delivery, remote-controlled therapy, water treatment, shape morphing, and magnetic refrigeration are a few advanced applications of hydrogel nanocomposite membranes. In this work, hydrogel nanocomposite membranes are synthesized by embedding nanometer-sized (diameter - 300 nm) Fe₃O₄ magnetic particles into polyvinyl alcohol (PVA) polymer. To understand the large deformation characteristics of these membranes, a well-known experimental method, the ball indentation technique, is used. The effects of different design parameters, such as membrane thickness, concentration of magnetic particles and ball diameter, on the viscoelastic properties are studied. All the experiments are carried out both without and with a static magnetic field. Finite element simulations are carried out to validate the experimental results. It is observed that the creep response decreases and Young's modulus increases as the thickness and the concentration of magnetic particles increase. Image analysis revealed that the hydrogel membranes undergo global deformation for a ball diameter of 18 mm and local deformation when the diameter decreases from 18 mm to 0.5 mm.
Keywords: ball indentation, hydrogel membranes, nanocomposites, Young's modulus
Procedia PDF Downloads 126
9114 Craniopharyngiomas: Surgical Techniques: The Combined Interhemispheric Sub-Commissural Translaminaterminalis Approach to Tumors in and Around the Third Ventricle: Neurological and Functional Outcome
Authors: Pietro Mortini, Marco Losa
Abstract:
Objective: Resection of large lesions growing into the third ventricle remains a demanding surgery, sometimes at risk of severe post-operative complications. Transcallosal and transcortical routes have been considered the approaches of choice to access the third ventricle; however, neurological consequences such as memory loss have been reported. We report the clinical results of the previously described combined interhemispheric sub-commissural translaminaterminalis approach (CISTA) for the resection of large lesions located in the third ventricle. Methods: The authors conducted a retrospective analysis of 10 patients who were operated on through the CISTA for the resection of lesions growing into the third ventricle. Results: Total resection was achieved in all cases. Cognitive worsening occurred in only one case. No perioperative deaths were recorded and, at last follow-up, all patients were alive. One year after surgery, 80% of patients had an excellent outcome, with a KPS of 100 and Glasgow Outcome Score (GOS). Conclusion: The CISTA represents a safe and effective alternative to transcallosal and transcortical routes to resect lesions growing into the third ventricle. It allows for a multiangle trajectory to access the third ventricle with a wide working area free from critical neurovascular structures, without any sectioning of the corpus callosum, the anterior commissure or the fornix.
Keywords: craniopharyngioma, surgery, sub-commissural translaminaterminalis approach (CISTA)
Procedia PDF Downloads 292
9113 Lattice Boltzmann Simulation of Fluid Flow and Heat Transfer Through Porous Media by Means of Pore-Scale Approach: Effect of Obstacles Size and Arrangement on Tortuosity and Heat Transfer for a Porosity Degree
Authors: Annunziata D’Orazio, Arash Karimipour, Iman Moradi
Abstract:
The size and arrangement of the obstacles in a porous medium have an influential effect on the fluid flow and heat transfer, even at the same porosity. In this regard, in the present study, several different numbers of obstacles, in both regular and staggered arrangements at the same porosity, have been simulated in a channel in order to compare the effects of the staggered and regular arrangements, as well as of different quantities of obstacles at the same porosity, on fluid flow and heat transfer. In the present study, the Single Relaxation Time Lattice Boltzmann Method, with the Bhatnagar-Gross-Krook (BGK) approximation and the D2Q9 model, is implemented for the numerical simulation. Also, the temperature field is modeled through a Double Distribution Function (DDF) approach. Results are presented in terms of velocity and temperature fields, streamlines, percentage of pressure drop and Nusselt number of the obstacle walls. Also, a correlation between the tortuosity and the Nusselt number of the obstacle walls, for both regular and staggered arrangements, has been proposed. On the other hand, the results illustrate that increasing the number of obstacles, as well as changing their arrangement from regular to staggered at the same porosity, increases the tortuosity and the Nusselt number of the obstacle walls.
Keywords: lattice Boltzmann method, heat transfer, porous media, pore-scale, porosity, tortuosity
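A minimal isothermal sketch of the flow part of such a simulation is given below: a D2Q9 BGK lattice with full bounce-back on a staggered array of square obstacles in a periodic channel driven by a body force. The grid size, relaxation time, forcing scheme and obstacle layout are illustrative choices, and the thermal double-distribution part of the paper is omitted.

```python
# Minimal D2Q9 BGK sketch: body-force-driven flow past square obstacles with full
# bounce-back; top/bottom walls are solid, the domain is periodic in x.
import numpy as np

nx, ny, tau, nsteps = 200, 60, 0.8, 3000
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

solid = np.zeros((nx, ny), dtype=bool)           # channel walls + staggered obstacles
solid[:, 0] = solid[:, -1] = True
for i, x0 in enumerate(range(20, nx - 20, 30)):
    y0 = 15 if i % 2 == 0 else 35
    solid[x0:x0 + 6, y0:y0 + 6] = True

def equilibrium(rho, ux, uy):
    cu = np.einsum('ai,ixy->axy', c, np.array([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny)); ux = np.zeros((nx, ny)); uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)
force = 1e-5                                     # constant body force along x

for _ in range(nsteps):
    rho = f.sum(axis=0)
    ux = np.einsum('a,axy->xy', c[:, 0], f) / rho
    uy = np.einsum('a,axy->xy', c[:, 1], f) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau               # BGK collision
    f += 3 * w[:, None, None] * c[:, 0][:, None, None] * force   # simple forcing term
    for a in range(9):                                        # streaming (periodic)
        f[a] = np.roll(np.roll(f[a], c[a, 0], axis=0), c[a, 1], axis=1)
    captured = f[:, solid].copy()                             # full bounce-back
    for a in range(9):
        f[a, solid] = captured[opp[a]]

rho = f.sum(axis=0)
ux = np.einsum('a,axy->xy', c[:, 0], f) / rho
print("mean pore velocity u_x: %.3e" % ux[~solid].mean())
```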
Procedia PDF Downloads 84
9112 An Operators’ Real-sense-based Fire Simulation for Human Factors Validation in Nuclear Power Plants
Authors: Sa-Kil Kim, Jang-Soo Lee
Abstract:
On March 31, 1993, a severe fire accident took place in a nuclear power plant located in Narora in North India. The event involved a major fire in the turbine building of NAPS unit-1 and resulted in a total loss of power to the unit for 17 hours. In addition, there was a heavy ingress of smoke into the control room, mainly through the intake of the ventilation system, forcing the operators to vacate the control room. The Narora fire accident provides lessons indicating that operators can lose composure and behave unpredictably during a fire. After the Fukushima accident, which resulted from a natural disaster, plants are also required to prepare for and control unanticipated external events for the ultimate safety of nuclear power plants. Since last year, our research team has been developing a test and evaluation facility that can simulate external events such as earthquakes and fires based on the operators' real sense. As one of the results of the project, we propose a unit real-sense-based facility that can simulate fire events in a control room and be utilized as a test-bed for human factors validation. The test-bed has the shape of an operator's workstation and functions to simulate fire conditions such as smoke, heat, and auditory alarms in accordance with prepared fire scenarios. Furthermore, the test-bed can be used for operator training and experience.
Keywords: human behavior in fire, human factors validation, nuclear power plants, real-sense-based fire simulation
Procedia PDF Downloads 282
9111 Molecular Dynamics Simulation of the Effect of the Solid Gas Interface Nanolayer on Enhanced Thermal Conductivity of Copper-CO2 Nanofluid
Authors: Zeeshan Ahmed, Ajinkya Sarode, Pratik Basarkar, Atul Bhargav, Debjyoti Banerjee
Abstract:
The use of CO2 in oil recovery and in CO2 capture and storage has been gaining traction in recent years. These applications involve heat transfer between CO2 and the base fluid, and hence there arises a need to improve the thermal conductivity of CO2 to increase the process efficiency and reduce cost. One way to improve the thermal conductivity is through nanoparticle addition to the base fluid. The nanofluid model in this study consisted of copper (Cu) nanoparticles in varying concentrations with CO2 as the base fluid. No experimental data are available on the thermal conductivity of CO2-based nanofluids. Molecular dynamics (MD) simulations are an increasingly adopted tool to perform preliminary assessments of nanoparticle (NP)-fluid interactions. In this study, the effect of the formation of a nanolayer (or molecular layering) at the gas-solid interface on thermal conductivity is investigated using equilibrium MD simulations by varying the NP diameter and keeping the volume fraction (1.413%) of the nanofluid constant, to check the effect of NP diameter on the nanolayer and the thermal conductivity. A dense semi-solid fluid layer was seen to form at the NP-gas interface, and its thickness increases with increasing particle diameter; the layer also moves with the NP Brownian motion. Density distributions have been computed to observe the nanolayer and its thickness around the NP. These findings are extremely beneficial, especially to industries engaged in oil recovery, as increased thermal conductivity of CO2 will lead to enhanced oil recovery and thermal energy storage.
Keywords: copper-CO2 nanofluid, molecular dynamics simulation, molecular interfacial layer, thermal conductivity
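For context, thermal conductivity is most commonly extracted from equilibrium MD through the Green-Kubo relation below. The abstract does not state which estimator was used, so this is given as the standard approach rather than the authors' exact procedure.

```latex
% Green-Kubo relation: kappa from the autocorrelation of the microscopic heat flux J(t),
% with V the system volume, k_B Boltzmann's constant and T the equilibrium temperature.
\kappa \;=\; \frac{1}{3\,V\,k_{B}\,T^{2}}\int_{0}^{\infty}
\left\langle \mathbf{J}(0)\cdot\mathbf{J}(t)\right\rangle \, dt
```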
Procedia PDF Downloads 335
9110 The Evaluation of Current Pile Driving Prediction Methods for Driven Monopile Foundations in London Clay
Authors: John Davidson, Matteo Castelletti, Ismael Torres, Victor Terente, Jamie Irvine, Sylvie Raymackers
Abstract:
The current industry approach to pile driving predictions consists of developing a model of the hammer-pile-soil system which simulates the relationship between soil resistance to driving (SRD) and blow counts (or pile penetration per blow). The SRD methods traditionally used are broadly based on static pile capacity calculations. The SRD is used in combination with a one-dimensional wave equation model to indicate the anticipated blow counts with depth for specific hammer energy settings. This approach has predominantly been calibrated on the relatively long, slender piles used in the oil and gas industry but is now being extended to allow calculations to be undertaken for relatively short, rigid, large-diameter monopile foundations. This paper evaluates the accuracy of current industry practice when applied to a site where large-diameter monopiles were installed in predominantly stiff fissured clay. Actual geotechnical and pile installation data, including pile driving records and signal matching analysis (based upon pile driving monitoring techniques), were used for the assessment of the case study site.
Keywords: driven piles, fissured clay, London clay, monopiles, offshore foundations
Procedia PDF Downloads 223
9109 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland
Authors: Alireza Ansariyar, Safieh Laaly
Abstract:
Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs' fuel consumption and air pollutants (CO, PM, and NOx) and utilizes meaningful linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and the fuel consumption were obtained for the time interval 2010 to 2021 from the macro simulation. Eventually, four linear regression models were proposed to predict the amounts of CO, NOx and PM pollutants, and the fuel consumption, in the future. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of the CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in Maryland.
Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models
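The sketch below illustrates the forecasting step with one ordinary least-squares model per output, fitted on synthetic data and applied to a hypothetical 2030 scenario. The predictor names, values and coefficients are illustrative placeholders for the survey and VISUM-derived variables used in the study (the original analysis was run in SPSS).

```python
# Minimal sketch: one linear regression per output (e.g. CO and fuel), with a forecast
# for a hypothetical future scenario. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(42)
n = 120
X = pd.DataFrame({
    "year": rng.integers(2010, 2022, n),
    "median_income_k": rng.normal(80, 15, n),
    "mean_driver_age": rng.normal(40, 8, n),
    "cav_share": rng.uniform(0, 0.3, n),
})
y = pd.DataFrame({                          # synthetic targets, loosely tied to predictors
    "CO_tons": 500 - 400 * X["cav_share"] + rng.normal(0, 10, n),
    "fuel_ML": 300 - 200 * X["cav_share"] + 0.5 * X["mean_driver_age"] + rng.normal(0, 8, n),
})

models = {}
for target in y.columns:
    m = LinearRegression().fit(X, y[target])
    err = mean_absolute_percentage_error(y[target], m.predict(X))
    models[target] = m
    print(f"{target}: MAPE = {err:.1%}, coefficients = {np.round(m.coef_, 2)}")

future = pd.DataFrame([{"year": 2030, "median_income_k": 95,
                        "mean_driver_age": 42, "cav_share": 0.5}])
print({t: round(float(m.predict(future)[0]), 1) for t, m in models.items()})
```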
Procedia PDF Downloads 442
9108 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work
Authors: Fawaz A. Binsarra, Halim Boussabaine
Abstract:
The notion of complexity science has been attracting the interest of researchers and professionals due to the need to better understand the dynamics and interaction structure of complex systems. In addition, complexity analysis has been used as an approach to investigate complex systems that contain a large number of components interacting with each other to accomplish specific outcomes and from which specific behavior emerges. The design process is considered a complex activity that involves a large number of interacting components, which can be grouped as design tasks, the design team, and the components of the design process. These three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. In this paper, the goal is to uncover the complex structure of information interactions in the building design process. Investigating the information interactions of the Royal Institute of British Architects Plan of Work 2013 as a case study, and using network analysis software to model the information interactions and uncover the structure and complexity of the building design process, will significantly enhance the efficiency of the building design process outcomes.
Keywords: complexity, process, building design, RIBA, design complexity, network, network analysis
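A minimal sketch of the network-modelling idea is shown below using networkx, with RIBA 2013 stage names as placeholder nodes and illustrative information-flow edges; the actual task-level breakdown and interaction data analyzed in the paper are not reproduced.

```python
# Minimal sketch: model design-information interactions as a directed graph and rank
# the nodes that broker the most information flow. Edges are illustrative placeholders.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Strategic Definition", "Preparation and Brief"),
    ("Preparation and Brief", "Concept Design"),
    ("Concept Design", "Developed Design"),
    ("Developed Design", "Technical Design"),
    ("Concept Design", "Technical Design"),      # an assumed information shortcut
    ("Technical Design", "Construction"),
])

print("information flows (edges):", G.number_of_edges())
centrality = nx.betweenness_centrality(G)
for task, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{task}: {score:.2f}")   # tasks that mediate the most information flow
```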
Procedia PDF Downloads 525
9107 Fault Analysis of Induction Machine Using Finite Element Method (FEM)
Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi
Abstract:
The paper presents a finite element (FE) based efficient analysis procedure for the induction machine (IM). Two FE formulation approaches are proposed to achieve this goal: the magnetostatic and the non-linear transient time-stepped formulations. A study based on finite element models offers much more information on the phenomena characterizing the operation of electrical machines than classical analytical models. This explains the increasing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. In this work, a simple dynamic model for an IM with an inter-turn winding fault and a broken bar fault is presented. This fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity. The comparison of the results obtained by the simulation tests allowed the precision of the proposed FEM model to be verified. This paper presents a technical method based on Fast Fourier Transform (FFT) analysis of the stator current and electromagnetic torque to detect broken rotor bar faults. The technique used and the results obtained clearly show the possibility of extracting signatures to detect and locate faults.
Keywords: Finite Element Method (FEM), Induction Motor (IM), short-circuit fault, broken rotor bar, Fast Fourier Transform (FFT) analysis
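The sketch below illustrates the signature-extraction step on a synthetic stator current: a broken rotor bar produces sidebands at (1 ± 2s)·f_s around the supply frequency, which the FFT makes visible. In the paper the current comes from the FE simulation; the amplitudes and slip used here are illustrative.

```python
# Minimal sketch of FFT-based broken-rotor-bar signature extraction on a synthetic current.
import numpy as np

fs_sampling, T = 5000.0, 10.0                  # sampling rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs_sampling)
f_supply, slip = 50.0, 0.03                    # supply frequency (Hz), per-unit slip

i_healthy = 10.0 * np.cos(2 * np.pi * f_supply * t)
i_fault = i_healthy \
    + 0.20 * np.cos(2 * np.pi * (1 - 2 * slip) * f_supply * t) \
    + 0.15 * np.cos(2 * np.pi * (1 + 2 * slip) * f_supply * t)

spectrum = np.abs(np.fft.rfft(i_fault * np.hanning(t.size))) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs_sampling)

# Inspect the fundamental and the (1 +/- 2s)f_s sidebands characteristic of the fault.
for f_target in [(1 - 2 * slip) * f_supply, f_supply, (1 + 2 * slip) * f_supply]:
    k = np.argmin(np.abs(freqs - f_target))
    print(f"{f_target:6.2f} Hz -> amplitude {spectrum[k]:.4f}")
```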
Procedia PDF Downloads 297
9106 Prediction of the Aerodynamic Stall of a Helicopter’s Main Rotor Using a Computational Fluid Dynamics Analysis
Authors: Assel Thami Lahlou, Soufiane Stouti, Ismail Lagrat, Hamid Mounir, Oussama Bouazaoui
Abstract:
The purpose of this research work is to keep the helicopter from stalling by finding the minimum and maximum values that the pitch angle can take in order to fly in a hover condition. The stall of a helicopter in hover occurs when the pitch angle is too small to generate the thrust required to support its weight or when the critical angle of attack that gives maximum lift is reached or exceeded. In order to find the minimum pitch angle, a 3D CFD simulation was carried out in this work using ANSYS FLUENT as the CFD solver. We started with a small value of the pitch angle θ and kept increasing its value until we found the thrust coefficient required to fly in a hover state and support the weight of the helicopter. For the CFD analysis, the Multiple Reference Frame (MRF) method with the k-ε turbulence model was used to study the 3D flow around the rotor for θmin. On the other hand, a 2D simulation of the NACA 0012 airfoil was executed with a velocity inlet Vin=ΩR/2 to visualize the flow at the spanwise location R/2 of the rotor disk, using the Spalart-Allmaras turbulence model. Finding the critical angle of attack at this position gives us the ability to predict the stall in hover flight. The results obtained will be presented later in the article. This study is useful for analyzing the limitations of the helicopter's main rotor and thus for anticipating accidents that can lead to extensive damage.
Keywords: aerodynamic, CFD, helicopter, stall, blades, main rotor, minimum pitch angle, maximum pitch angle
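For reference, the hover trim condition and the thrust coefficient referred to above can be written as below; this uses the common US helicopter convention (without the 1/2 factor), which may differ from the definition adopted in the article.

```latex
% Hover trim and thrust coefficient: rho = air density, A = rotor disk area,
% Omega*R = blade tip speed, W = aircraft weight.
T = W, \qquad C_T \;=\; \frac{T}{\rho\,A\,(\Omega R)^{2}}
```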
Procedia PDF Downloads 77