Search results for: face comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7709

689 The Aromaticity of P-Substituted O-(N-Dialkyl)Aminomethylphenols

Authors: Khodzhaberdi Allaberdiev

Abstract:

Aromaticity, one of the most important concepts in organic chemistry, has attracted considerable interest from both experimentalists and theoreticians. Geometry optimizations of p-substituted o-(N-dialkyl)aminomethylphenols (o-DEAMPHs), XC₆H₅CH₂Y (X = p-OCH₃, CH₃, H, F, Cl, Br, COCH₃, COOCH₃, CHO, CN and NO₂; Y = o-N(C₂H₅)₂), were performed in the gas phase at the B3LYP/6-311+G(d,p) level. The aromaticities of the considered molecules were investigated using different indices, including geometrical (HOMA and Bird), electronic (FLU, PDI and SA), and magnetic (NICS(0), NICS(1) and NICS(1)zz) indices. Linear dependencies were obtained between some aromaticity indices; the best correlation is observed between the Bird and PDI indices (R² = 0.9240). However, not all types of indices, or even different indices within the same type, correlate well with each other. Surprisingly, for the studied molecules, where the geometrical and electronic indices cannot correctly describe the aromaticity of the ring, the magnetism-based index successfully predicts the aromaticity of the systems. ¹H NMR spectra of the compounds were obtained at the B3LYP/6-311+G(d,p) level using the GIAO method. The excellent linear correlation (R² = 0.9996) between the experimentally obtained ¹H NMR chemical shifts of the hydrogen atoms and those calculated at B3LYP/6-311+G(d,p) demonstrates a good assignment of the experimental chemical shift values to the calculated structures of o-DEAMPH. The best linear correlation with the Hammett substituent constants is observed for the NICS(1)zz index in comparison with the other indices: NICS(1)zz = -21.5552 + 1.1070 σp⁻ (R² = 0.9394). The presence of an intramolecular hydrogen bond in the studied molecules also changes the aromatic character of the substituted o-DEAMPHs. For X = NO₂, the HOMA index predicted a 3.4% reduction in π-electron delocalization, about double that observed for p-nitrophenol. The influence of intramolecular H-bonding on the aromaticity of the benzene ring in the ground state (S₀) is described by equations between NICS(1)zz and the H-bond energies: experimental, Eₑₓₚ, IR-spectroscopically predicted, Eν, and topological, E_QTAIM, with correlation coefficients R² = 0.9666, R² = 0.9028 and R² = 0.8864, respectively. The NICS(1)zz index also correlates with the usual descriptors of the hydrogen bond, while the other indices do not give any meaningful results. The influence of intramolecular H-bond formation on the aromaticity of some substituted o-DEAMPHs is a criterion for considering the multidimensional character of aromaticity. Linear relationships were also revealed between NICS(1)zz and both the pyramidality of the nitrogen atom, Σ N(C₂H₅)₂, and the dihedral angle φ(CAr–CAr–CCH₂–N), characterizing out-of-plane properties. These results demonstrate the nonplanar structure of the o-DEAMPHs. Finally, the data for X = H were excluded when considering the dependencies of NICS(1)zz, because the NICS(1) and NICS(1)zz values are the most negative for unsubstituted DEAMPH, indicating its highest aromaticity; that was not the case for the NICS(0) index.
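
The Hammett regression reported above is a simple least-squares fit; the sketch below shows how such a fit and its R² can be computed, using invented placeholder values for the substituent constants and NICS(1)zz, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical Hammett sigma_p- constants and NICS(1)zz values (ppm);
# placeholders for illustration only, not values from the study.
sigma_p = np.array([-0.26, -0.17, 0.06, 0.19, 0.25, 0.84, 1.00, 1.27])
nics_1zz = np.array([-21.9, -21.7, -21.5, -21.3, -21.2, -20.6, -20.4, -20.1])

# Least-squares fit of NICS(1)zz = a + b * sigma_p- and its R^2.
fit = stats.linregress(sigma_p, nics_1zz)
print(f"NICS(1)zz = {fit.intercept:.4f} + {fit.slope:.4f} * sigma_p-")
print(f"R^2 = {fit.rvalue**2:.4f}")
```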

Keywords: aminomethylphenols, DFT, aromaticity, correlations

Procedia PDF Downloads 181
688 A Preliminary Study on the Effects of Lung Impact on Ballistic Thoracic Trauma

Authors: Amy Pullen, Samantha Rodrigues, David Kieser, Brian Shaw

Abstract:

The aim of the study was to determine if a projectile interacting with the lungs increases the severity of injury in comparison to a projectile interacting with the ribs or intercostal muscle. This comparative study employed a 10% gelatine-based model with either porcine ribs or balloons embedded to represent a lung. Four sample groups containing five samples each were evaluated: control (plain gel), intercostal impact, rib impact, and lung impact. Two ammunition natures were evaluated at a range of 10 m: 5.56x45mm and 7.62x51mm. Aspects of projectile behavior were quantified, including exiting projectile weight, location of yawing, projectile fragmentation and distribution, location and area of the temporary cavity, permanent cavity formation, and overall energy deposition. Major findings included that, for the 5.56mm ammunition, a higher percentage of the projectile weight exited the block for the lung group than for the intercostal and rib groups, but a similar percentage to the control. For the 7.62mm ammunition, however, the lung group showed a higher percentage of the projectile weight exiting the block than the control, intercostal, and rib groups. The total weight of projectile fragments as a function of penetration depth revealed large fluctuations and significant intra-group variation for both ammunition natures. Despite the lack of a clear trend, both plots show that the lung leads to a greater weight of projectile fragments exiting the model. The lung was shown to have a later center of the temporary cavity than the control, intercostal, and ribs for both ammunition types. It was also shown to have a temporary cavity volume similar to the control, intercostal, and ribs for the 5.56mm ammunition, and a temporary cavity similar to the intercostal for the 7.62mm ammunition. The lung was shown to leave a projectile tract similar to the control, intercostal, and ribs for both ammunition types. It was also shown to have larger shear planes than the control and the intercostal, but similar to the ribs, for the 5.56mm ammunition, whereas it showed smaller shear planes than the control but shear planes similar to the intercostal and ribs for the 7.62mm ammunition. The lung was shown to have less energy deposited than the control, intercostal, and ribs for both ammunition types. This comparative study provides insights into the influence of the lungs on thoracic gunshot trauma. It indicates that the lungs limit projectile deformation, cause a later onset of yawing, and consequently limit the energy deposited along the wound tract, creating a deeper and smaller cavity. This suggests that lung impact creates an altered pattern of local energy deposition within the target, which will affect the severity of trauma.

Keywords: ballistics, lung, trauma, wounding

Procedia PDF Downloads 170
687 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser running on a CPU issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience because it is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one is the bottleneck, if one exists. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of user experience and is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measured frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the GPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
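
To make the decision logic concrete, the following minimal sketch implements the described governor loop under stated assumptions: the platform hooks (read_wm_utilization, lower_cpu_level, lower_gpu_level) are hypothetical placeholders, and an exponentially weighted average stands in for the paper's weighted average.

```python
import time

HIGH = 0.90    # above this, the CPU (Window Manager) is the bottleneck
LOW = 0.60     # below this, the GPU is the bottleneck
ALPHA = 0.3    # smoothing weight against sensitivity and fluctuation
PERIOD = 0.1   # seconds between utilization checks

def governor(read_wm_utilization, lower_cpu_level, lower_gpu_level):
    """Bottleneck-aware governor sketch: the Window Manager's CPU
    utilization identifies the bottleneck, and the other unit is
    stepped down gradually to save energy."""
    ewma = read_wm_utilization()
    while True:
        ewma = ALPHA * read_wm_utilization() + (1 - ALPHA) * ewma
        if ewma > HIGH:
            # CPU-bound: the saturated Window Manager limits frame
            # delivery, so the GPU can be stepped down without hurting FPS.
            lower_gpu_level()
        elif ewma < LOW:
            # GPU-bound: the Window Manager idles waiting for rendered
            # frames, so the CPU can be stepped down.
            lower_cpu_level()
        time.sleep(PERIOD)
```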

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 192
686 Prospects for the Development of e-Commerce in Georgia

Authors: Nino Damenia

Abstract:

E-commerce opens a new horizon for business development, which is why the presence of e-commerce is a necessary condition for the formation, growth, and development of a country's economy. Worldwide, e-commerce turnover grows at a high rate every year, as the electronic environment provides great opportunities for product promotion. E-commerce in Georgia is developing at a fast pace, but it is still a relatively young direction in the country's economy. Movement restrictions and other public health measures caused by the COVID-19 pandemic reduced economic activity in most economic sectors and countries, significantly affecting production, distribution, and consumption. The pandemic accelerated digital transformation: digital solutions enable people and businesses to continue part of their economic and social activities remotely, and this has also led to the growth of e-commerce. According to the data of the National Statistics Service of Georgia, the share of online trade is higher in cities (27.4%) than in rural areas (9.1%). The COVID-19 pandemic forced local businesses to expand their digital offerings: the size of the local market increased 3.2 times in 2020, to 138 million GEL, and in 2018-2020 the share of local e-commerce increased from 11% to 23%. In Georgia, the state is actively engaged in the promotion of activities based on information technologies. Many measures have been taken for this purpose, but compared to other countries, this process is slow in Georgia. The purpose of the study is to determine development prospects for the economy of Georgia based on an analysis of electronic commerce. The research draws on articles and works by Georgian and foreign scientists, reports of international organizations, collections of scientific conferences, and scientific electronic databases. The empirical base of the research comprises the data and annual reports of the National Statistical Service of Georgia, online global statistical resources, and other sources. While working on the article, a questionnaire was developed, on the basis of which an electronic survey of selected groups of respondents was conducted. The survey examined how intensively Georgian citizens use online shopping, including which age categories use electronic commerce, for what purposes, and how satisfied they are. Various theoretical and methodological research tools, as well as analysis, synthesis, comparison, and other methods, are used to achieve the set goal in the research process. The research results and recommendations will contribute to the development of e-commerce in Georgia and to economic growth based on it.

Keywords: e-commerce, information technology, pandemic, digital transformation

Procedia PDF Downloads 75
685 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to explain the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability," facing the challenges posed by the complexity of artificial intelligence systems and their effects, and then to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and the distance between, the actors: decision-making is split between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. The second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements into solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis that accounts for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of artificial intelligence systems despite their evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63
684 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
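
One plausible reading of the distribution-matching step is sketched below on toy data: instead of computing the WoE from raw good/bad counts, a per-bin WoE-like score is estimated from the average predicted log-odds of an ML model, net of the portfolio's prior log-odds. The model choice and the data are assumptions for illustration, not the Dun and Bradstreet implementation.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy portfolio: one explanatory variable and a binary bad/good outcome.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = (rng.random(5000) < 1 / (1 + np.exp(-1.5 * x))).astype(int)

# Fit an ML model and convert its predicted probabilities to log-odds.
model = GradientBoostingClassifier().fit(x.reshape(-1, 1), y)
p = np.clip(model.predict_proba(x.reshape(-1, 1))[:, 1], 1e-6, 1 - 1e-6)
log_odds = np.log(p / (1 - p))

# WoE-like estimate per bin: mean posterior log-odds minus prior log-odds.
# This stays stable even in sparse bins, unlike raw count-based WoE.
prior = np.log(y.mean() / (1 - y.mean()))
bins = pd.qcut(x, q=10)
woe_estimate = pd.Series(log_odds).groupby(bins, observed=True).mean() - prior
print(woe_estimate)
```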

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 134
683 Development of a Bioprocess Technology for the Production of Vibrio midae, a Probiotic for Use in Abalone Aquaculture

Authors: Ghaneshree Moonsamy, Nodumo N. Zulu, Rajesh Lalloo, Suren Singh, Santosh O. Ramchuran

Abstract:

The abalone industry of South Africa is under severe pressure due to illegal harvesting and poaching of this seafood delicacy. Abalones are harvested excessively; as a result, the animals do not have a chance to replace themselves in their habitats, resulting in a drastic decrease in natural stocks of abalone. Abalone has an extremely slow growth rate and takes approximately four years to reach a market-acceptable size; it was therefore imperative to investigate methods to boost the overall growth rate and immunity of the animal. Research at the University of Cape Town (UCT) resulted in the isolation of two microorganisms from the gut of the abalone, a yeast isolate, Debaryomyces hansenii, and a bacterial isolate, Vibrio midae, and their characterisation for probiotic abilities. This work resulted in an internationally competitive concept technology that was patented. The next stage of research was to develop a suitable bioprocess to enable commercial production. Numerous steps were taken to develop an efficient production process for V. midae, one of the isolates found by UCT. The initial stages of research involved the development of a stable and robust inoculum and the optimization of physiological growth parameters such as temperature and pH. A range of temperature and pH conditions were evaluated, and the data obtained revealed an optimum growth temperature of 30°C and a pH of 6.5. Once these critical growth parameters were established, further media optimization studies were performed. Corn steep liquor (CSL) and high-test molasses (HTM) were selected as suitable alternatives to more expensive, conventionally used growth medium additives. The optimization of the CSL (6.4 g.l⁻¹) and HTM (24 g.l⁻¹) concentrations in the growth medium resulted in a 180% increase in cell concentration, a 5716-fold increase in cell productivity, and a 97.2% decrease in the material cost of production in comparison to the conventional growth conditions and parameters used at the onset of the study. In addition, a stable, market-ready liquid probiotic product, based on the viable but not culturable (VBNC) state of Vibrio midae cells, was developed during the downstream processing stage of the study. The demonstration of this technology at full manufacturing scale has further enhanced the attractiveness and commercial feasibility of this production process.

Keywords: probiotics, abalone aquaculture, bioprocess technology, manufacturing scale technology development

Procedia PDF Downloads 152
682 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and to evaluate the accuracy of using the CTR to detect an enlarged heart, receiver operating characteristic (ROC) curves were generated. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter relative to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR ranging from above 0.50 to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy; therefore, further studies with larger sample sizes and more diverse populations are needed to validate these findings.
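
The reported figures are consistent with the 2x2 table reconstructed below (taking autopsy heart weight > 450 g as the reference standard); the individual cell counts are an inference from the published percentages, not values stated in the paper.

```python
# Reconstructed confusion matrix: 9 autopsy-positive and 12 PMCT-positive
# cases out of n = 54, consistent with the reported statistics.
tp, fp, fn, tn = 5, 7, 4, 38

sensitivity = tp / (tp + fn)                 # 5/9   = 55.56%
specificity = tn / (tn + fp)                 # 38/45 = 84.44%
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 43/54 = 79.63%
print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}, "
      f"accuracy = {accuracy:.2%}")
```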

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81
681 Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Authors: Markus Nordlund, Arkadiusz K. Kuczaj

Abstract:

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), referred to as the Electrically Heated Tobacco System (EHTS) in this paper, already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (the Electrically Heated Tobacco Product, EHTP) when it is inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS, as well as for relevant gas mixture compositions measured in the EHTP aerosol, consisting mostly of water, glycerol and nicotine, but also other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use, when operated in the Holder, has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, here mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state alone and therefore could not generate aerosol droplets from the multicomponent gas mixture at the simulated operating conditions. For the analytically characterized aerosol composition and the estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. This implies that, according to the CNT, an aerosol former such as glycerol needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess whether these conclusions are sensitive to the initial amount of the minor compounds, and to include and represent the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with the initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore did not actively contribute to the nucleation process. This implies that, according to the CNT, an aerosol cannot be generated from multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP without the help of an aerosol former, even if all minor compounds are released or generated in a single puff.
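
For background, the single-component form of the classical nucleation barrier is ΔG* = 16πσ³v_m²/(3(k_B T ln S)²), with homogeneous nucleation possible only at supersaturation S > 1; the study applies an extended multicomponent version of the same theory. The sketch below evaluates the classical barrier with order-of-magnitude, glycerol-like parameters, which are assumptions and not the EHTP simulation inputs.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_barrier(sigma, v_m, temp, s):
    """Classical nucleation barrier dG* = 16*pi*sigma^3*v_m^2 /
    (3*(k_B*T*ln S)^2); no homogeneous nucleation for S <= 1."""
    if s <= 1.0:
        return np.inf
    return 16 * np.pi * sigma**3 * v_m**2 / (3 * (K_B * temp * np.log(s))**2)

# Order-of-magnitude, glycerol-like parameters (assumed for illustration).
sigma = 0.063    # surface tension, N/m
v_m = 1.2e-28    # molecular volume, m^3
for s in (0.8, 2.0, 5.0):
    dg = cnt_barrier(sigma, v_m, 320.0, s)
    label = "no nucleation (subsaturated)" if np.isinf(dg) else f"{dg:.2e} J"
    print(f"S = {s}: barrier = {label}")
```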

Keywords: aerosol, classical nucleation theory (CNT), electrically heated tobacco product (EHTP), electrically heated tobacco system (EHTS), modeling, multicomponent, nucleation

Procedia PDF Downloads 277
680 Evaluating of Chemical Extractants for Assessment of Bioavailable Heavy Metals in Polluted Soils

Authors: Violina Angelova, Krasimir Ivanov, Stefan Krustev, Dimitar Dimitrov

Abstract:

The availability of a metal is characterised by the quantity transferred from soil into different extractants or by its content in plants. In the literature, the terms 'available forms of compounds' and 'mobile forms' are often treated as equivalents of the term 'accessible to plants'. A rapid and sufficiently reliable method for determining the plant-accessible forms is extraction with different extractants that imitate the functioning of the root system. The criterion usually taken for the pertinence of an extractant to this purpose is a statistically significant correlation between the quantities of the element extracted from soil and its content in plants. The aim of this work was to evaluate the effectiveness of various extractants (DTPA-TEA, AB-DTPA, Mehlich 3, 0.01 M CaCl₂, 1 M NH₄NO₃) for the determination of the bioavailability of heavy metals in industrially polluted soils from the metallurgical activity near Plovdiv and Kardjali, Bulgaria. Quantitative measurements of heavy metal contents were performed with ICP-OES. The results showed that the extraction capacity was as follows: Mehlich 3 > AB-DTPA > DTPA-TEA > CaCl₂ > NaNO₃. The content of the mobile form of heavy metals depends on the nature of the metal ion, the nature of the extractant, and pH. The obtained results show that CaCl₂ extracts a greater quantity of mobile forms of heavy metals than NH₄NO₃. DTPA-TEA and AB-DTPA are capable of extracting from the soil not only the heavy metals participating in exchange processes but also the heavy metals bound in carbonates and organic complexes, as well as those bound and occluded in oxides and secondary clay minerals. AB-DTPA extracts somewhat more heavy metals than DTPA-TEA. The darker colour of the solutions obtained with AB-DTPA indicates that considerable quantities of organic matter are destroyed. A comparison of the mobile forms of heavy metals extracted from clean and highly polluted soils revealed that in the polluted soils a greater portion of the heavy metals exists in mobile form. High correlation coefficients were obtained between the metals extracted with the different extractants and their total content in soil (r = 0.9). A positive correlation between pH, soil organic matter, and the extracted quantities of heavy metals was found. The results of the correlation analysis revealed that the heavy metals extracted by DTPA-TEA, AB-DTPA, Mehlich 3, CaCl₂, and NaNO₃ correlated significantly with plant uptake, with significant correlations found between the DTPA-TEA, AB-DTPA, and CaCl₂ extractions and heavy metal concentrations in plants. The application of extraction methods containing chelating agents would be recommended in future research on the availability of heavy metals in polluted soils.

Keywords: availability, chemical extractants, heavy metals, mobile forms

Procedia PDF Downloads 355
679 Assessing the Impact of Additional Information during Motor Preparation in Lane Change Task

Authors: Nikita Rajendra Sharma, Jai Prakash Kushvah, Gerhard Rinkenauer

Abstract:

Driving a car is a discrete aiming movement in which drivers aim at the successful extraction of relevant information and the elimination of potentially distracting information. It is motor preparation which enables one to react to certain stimuli on site by allowing perceptual processing for optimal adjustment. Drivers prepare their responses according to the available resources of advance and ongoing information to drive efficiently. This requires constant programming and reprogramming of the motor system. The reaction time (RT) is shorter when a response signal is preceded by a warning signal; the reason for this reduced response time is that the warning signal causes the participant to prepare for the upcoming response by updating the motor program before execution. While performing the primary task of changing lanes while driving, the simultaneous occurrence of additional information during the presentation of cues (congruent or incongruent with respect to the target cue) might impact motor preparation and execution. The presence of additional information (other than the warning or response signal) between the warning signal and the imperative stimulus influences human motor preparation to a considerable extent. The present study aimed to assess the impact of congruent and incongruent additional information (with respect to the imperative stimulus) on driving performance (reaction time, steering wheel amplitude, and steering wheel duration) during a lane change task, implementing a movement pre-cueing paradigm. Twenty-two young licensed car drivers (Mage = 24.1 ± 3.21 years, M = 10, F = 12, age range 21-33 years) participated in the study. The study revealed that additional information influenced overall driving performance, acting as both potential distractors and relevant information. The findings suggest that the events of additional information influenced reaction time and steering wheel angle when acting as potential distractors or irrelevant information: participants took longer to respond, and higher steering wheel angles were reported, for targets coupled with additional information in comparison with warning signals preceded by potential distractors, and response times were longer for a higher number of lanes (2 lanes > 1 lane). The same additional information, appearing interchangeably at warning signals and targets, worked as relevant information facilitating motor programming in the trials where it was congruent with the lane change direction.

Keywords: additional information, lane change task, motor preparation, movement pre-cueing, reaction time, steering wheel amplitude

Procedia PDF Downloads 191
678 TNF-Alpha and MDA Levels in Hearts of Cholesterol-Fed Rats Supplemented with Extra Virgin Olive Oil or Sunflower Oil, in Either Commercial or Modified Forms

Authors: Ageliki I. Katsarou, Andriana C. Kaliora, Antonia Chiou, Apostolos Papalois, Nick Kalogeropoulos, Nikolaos K. Andrikopoulos

Abstract:

Oxidative stress is a major mechanism underlying CVDs, while inflammation, a process intertwined with oxidative stress, is also linked to CVDs. Extra virgin olive oil (EVOO) is widely known to play a pivotal role in CVD prevention and reduction. However, in most studies, olive oil constituents are evaluated individually and not as part of the native food; hence, potential synergistic effects as drivers of EVOO's beneficial properties may be underestimated. In this study, EVOO lipidic and polar phenolic fractions were evaluated for their effect on inflammatory (TNF-alpha) and oxidation (malondialdehyde/MDA) markers in cholesterol-fed rats. To that end, oils with discernible lipidic profiles and polar phenolic contents were used. Wistar rats were fed either a high-cholesterol diet (HCD) or a HCD supplemented with oils, either commercially available, i.e., EVOO and sunflower oil (SO), or modified as to their polar phenol content, i.e., phenolics-deprived EVOO (EVOOd) and SO enriched with the EVOO phenolics (SOe). After 9 weeks of dietary intervention, heart and blood samples were collected. The HCD induced dyslipidemia, shown by increases in serum total cholesterol, low-density lipoprotein cholesterol (LDL-c), and triacylglycerols. Heart tissue was affected by the dyslipidemia: oxidation was indicated by an increase in MDA in cholesterol-fed rats, and inflammation by an increase in TNF-alpha. In both cases, this augmentation was attenuated in the EVOO and SOe diets. With respect to oxidation, SO enrichment with the EVOO phenolics brought lipid peroxidation levels as low as in EVOO-fed rats. This suggests that phenolic compounds may act as antioxidant agents in the rat heart; a possible mechanism underlying this activity is the protective effect of phenolics on the mitochondrial membrane against oxidative damage. This was further supported by the EVOO/EVOOd comparison, with the former presenting lower heart MDA content. As for heart inflammation, phenolics naturally present in EVOO, as well as phenolics chemically added to SO, exhibited quenching abilities on heart TNF-alpha levels of cholesterol-fed rats. TNF-alpha may have played a causative role in oxidative stress induction, while the opposite may also have happened, setting up a vicious cycle. Overall, diet supplementation with EVOO or SOe attenuated the hypercholesterolemia-induced increases in MDA and TNF-alpha in Wistar rat hearts. This is attributed to phenolic compounds either naturally existing in olive oil or added as fortificants to seed oil.

Keywords: extra virgin olive oil, hypercholesterolemic rats, MDA, polar phenolics, TNF-alpha

Procedia PDF Downloads 498
677 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the few forthcoming decades. First, the action plans determined according to COP21 and the goal of CO₂ emission reduction already have an impact on the policies of countries. Secondly, swiftly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to reach as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices, and digital improvements will be the outstanding technologies, and these innovations will reshape the market. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission, and distribution. Specifically, the electricity generation mix becomes vital both for the prevention of CO₂ emissions and for the reduction of power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary source diversity and source planning of electricity generation are crucial for improving the wealth of citizens' lives. Approaches considering only CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the production mix; employment and positive contributions to macroeconomic values are also important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas, and hydropower) between 2018-2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing CO₂ emissions, investment amount, and electricity sales price while maximizing total employment and the positive contribution to the current deficit. In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies such as huge capacity allotments might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and a re-design of support for biogas and geothermal can be recommended.
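
The sketch below illustrates the structure of such a multi-objective LP using a simple weighted-sum scalarization on invented per-MW coefficients; the study itself combines Dinkelbach's algorithm with Guzel's approach precisely to avoid fixing these weights by user preference.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-MW coefficients for (solar, wind, geothermal, biogas,
# hydro); illustrative placeholders, not the study's data.
co2    = np.array([48, 11, 38, 20, 24])       # CO2 proxy (minimize)
invest = np.array([0.9, 1.2, 2.8, 3.5, 2.0])  # investment, M$/MW (minimize)
price  = np.array([45, 40, 70, 90, 50])       # sales price proxy (minimize)
jobs   = np.array([5.8, 3.0, 4.0, 7.0, 2.5])  # jobs/MW (maximize)
trade  = np.array([0.4, 0.3, 0.6, 0.7, 0.5])  # current-deficit benefit (maximize)

# Weighted-sum scalarization: normalize each objective, negate the ones
# to be maximized, and combine into a single cost vector.
w = np.array([0.25, 0.20, 0.20, 0.20, 0.15])
objectives = [co2, invest, price, -jobs, -trade]
c = sum(wi * o / np.abs(o).max() for wi, o in zip(w, objectives))

# Allocate exactly 10,000 MW of new capacity, at most 6,000 MW per source.
res = linprog(c, A_eq=[np.ones(5)], b_eq=[10_000],
              bounds=[(0, 6_000)] * 5, method="highs")
print("MW per technology:", res.x)
```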

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 244
676 Effect of Carbide Precipitates in Tool Steel on Material Transfer: A Molecular Dynamics Study

Authors: Ahmed Tamer AlMotasem, Jens Bergström, Anders Gåård, Pavel Krakhmalev, Thijs Jan Holleboom

Abstract:

In sheet metal forming processes, the accumulation and transfer of sheet material to tool surfaces, often referred to as galling, is the major cause of tool failure. The initiation of galling is assumed to occur due to local adhesive wear between the two surfaces. Therefore, reducing adhesion between the tool and the work sheet has great potential to improve the galling resistance of tool materials. Experimental observations and theoretical studies show that the presence of primary micro-sized carbides and/or nitrides in alloyed steels may significantly improve galling resistance. Decreased adhesion between the ceramic precipitates and the sheet material counter-surface is generally put forward as the main reason for the latter observations. On the other hand, adhesion processes occur on an atomic scale, and hence a fundamental understanding of galling can be obtained via atomic-scale simulations. In the present study, molecular dynamics simulations utilizing a second-nearest-neighbor embedded atom method potential are used to investigate the influence of nano-sized cementite precipitates embedded in the tool atoms. The main aim of the simulations is to gain new fundamental knowledge of galling initiation mechanisms. Two tool/work-piece configurations, iron/iron and iron-cementite/iron, are studied under dry sliding conditions. We find that the average frictional force decreases, whereas the normal force increases, for the iron-cementite/iron system in comparison to the iron/iron configuration. Moreover, the average friction coefficient between the tool and the work-piece decreases by about 10% for the iron-cementite/iron case. The increase of the normal force in the iron-cementite/iron system may be attributed to the high stiffness of cementite compared to bcc iron. In order to qualitatively explain the effect of cementite on adhesion, the adhesion force between self-mated iron/iron and cementite/iron surfaces was determined, and we found that the iron/cementite interface exhibits a lower adhesive force than the iron/iron interface. The variation of the adhesion force with temperature was investigated up to 600 K, and we found that the adhesive force generally decreases with increasing temperature. Structural analyses show that plastic deformation is the main deformation mechanism of the work-piece, accompanied by dislocation generation.
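
The friction coefficient quoted above is the ratio of time-averaged lateral to normal force on the sliding tool; a minimal post-processing sketch of that ratio is shown below, on synthetic force traces standing in for MD output.

```python
import numpy as np

# Synthetic stand-ins for per-timestep lateral and normal forces on the
# tool during dry sliding; real values would come from the MD trajectory.
rng = np.random.default_rng(1)
f_lateral = 2.0 + 0.3 * rng.standard_normal(10_000)   # nN
f_normal = 20.0 + 1.0 * rng.standard_normal(10_000)   # nN

mu = f_lateral.mean() / f_normal.mean()  # ratio of time averages
print(f"average friction coefficient ~ {mu:.3f}")
```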

Keywords: adhesion, cementite, galling, molecular dynamics

Procedia PDF Downloads 301
675 Method for Selecting and Prioritising Smart Services in Manufacturing Companies

Authors: Till Gramberg, Max Kellner, Erwin Gross

Abstract:

This paper presents a comprehensive investigation into the topic of smart services and IIoT-Platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.

Keywords: smart services, IIoT, industrie 4.0, IIoT-platform, big data

Procedia PDF Downloads 88
674 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050

Authors: Farzaneh Sasanpour, Saeed Amini Varaki

Abstract:

Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, city resilience means adopting an approach in the risk management process that can respond to sensitive conditions. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. During the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One method that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type; its general pattern is descriptive-analytical, and since it attempts to relate the components of urban resilience to pandemic crises, provide indicators, and explain scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), structural analysis of mutual effects and the Micmac software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were organized into five main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables, based on mutual effects analysis. Finally, the key factors and variables were categorized in five main areas: managerial-institutional, with five variables; technological (smartness), with three variables; economic, with two variables; socio-cultural, with three variables; and physical-infrastructural, with seven variables. These factors and variables were used as the key factors and effective driving forces for explaining and developing scenarios of the resilience of Tehran's urban areas against pandemic crises (COVID-19). To develop these scenarios, intuitive logic, scenario planning as a futures-research method, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn up and selected, creatively named using the metaphor of weather conditions, each indicating the general outline of the condition of the Tehran metropolis in that situation: 1) the solar scenario (favorable governance and management, leading in smart technology); 2) the cloud scenario (favorable governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); and 4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation, and the storm scenario the worst, for the Tehran metropolis.
According to the findings of this research, in order to achieve a better tomorrow for the metropolis of Tehran, city managers can use futures-research methods to form a coherent picture, with the long-term horizon of 2050, of all the factors and components of urban resilience against pandemic crises, and thereby provide the pathways and platforms for upgrading and increasing the capacity to deal with crisis. This would create the conditions needed for the realization, development, and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.

Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran

Procedia PDF Downloads 67
673 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers: the task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists but also for researchers looking to find comparisons across multiple datasets, or for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from the metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are compared to a baseline of elastic search applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making this public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
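
As a contrast to the proposed system, a purely lexical matcher such as the sketch below links queries to metadata topics by TF-IDF cosine similarity; the strings are invented, and the example also shows why lexical overlap alone struggles with synonymy ('warm' vs. 'temperature'), the gap the knowledge graph is meant to close.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented metadata topic strings, not actual Earthdata records.
topics = [
    "sea surface temperature level-3 gridded radiometer product",
    "tropospheric nitrogen dioxide column density retrieval",
    "soil moisture active passive L-band brightness temperature",
]
query = "how warm is the ocean surface"

vec = TfidfVectorizer(stop_words="english")
m = vec.fit_transform(topics + [query])
scores = cosine_similarity(m[-1], m[:-1]).ravel()

# Only the shared token 'surface' links the query to the right topic;
# 'warm' never matches 'temperature' lexically.
best = scores.argmax()
print(f"best match (score {scores[best]:.2f}): {topics[best]}")
```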

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 148
672 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces the damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region, most notably the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model include detailed information about the individual structures and the inundation levels and wave heights for the selected region; calculation of wind damage to structures was also incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm, and in both storm cases the effect of natural dunes on coastal risk was investigated. The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional ADCIRC-SWAN grid to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater, as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.
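
The transect-volume comparison used to score XBeach can be sketched as follows: integrate the pre- minus post-storm bed elevation along each cross-shore profile and compare modeled to observed eroded volume. The profiles below are synthetic placeholders, not the Rhode Island survey data.

```python
import numpy as np

x = np.linspace(0, 200, 401)          # cross-shore distance, m
pre = 6 - 0.03 * x                    # synthetic pre-storm profile, m
post_obs = pre - 0.40 * np.exp(-((x - 60) / 25) ** 2)   # observed erosion
post_mod = pre - 0.34 * np.exp(-((x - 65) / 25) ** 2)   # modeled erosion

# Eroded volume per metre of shoreline: integrate positive elevation loss.
v_obs = np.trapz(np.clip(pre - post_obs, 0, None), x)
v_mod = np.trapz(np.clip(pre - post_mod, 0, None), x)
print(f"eroded volume error = {abs(v_mod - v_obs) / v_obs:.1%}")
```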

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 116
671 Innovative Grafting of Polyvinylpyrrolidone onto Polybenzimidazole Proton Exchange Membranes for Enhanced High-Temperature Fuel Cell Performance

Authors: Zeyu Zhou, Ziyu Zhao, Xiaochen Yang, Ling AI, Heng Zhai, Stuart Holmes

Abstract:

As a promising sustainable alternative to traditional fossil fuels, fuel cell technology is highly favoured for its high working efficiency and reduced emissions. For high-temperature fuel cells (operating above 100 °C), the most commonly used proton exchange membrane (PEM) is the phosphoric acid (PA)-doped polybenzimidazole (PBI) membrane. Grafting is a promising strategy to advance PA-doped PBI PEM technology. Existing grafting modifications of PBI PEMs mainly focus on grafting phosphate-containing or alkaline groups onto the PBI molecular chains. However, quaternary ammonium-based grafting approaches face a common challenge: the deacidifying agents (e.g., NaH, NaOH, KOH, K2CO3) required to initiate the N-alkylation reaction can lead to ionic crosslinking between the quaternary ammonium group and PBI. Polyvinylpyrrolidone (PVP) is another widely used polymer; the N-heterocycle groups within PVP endow it with a significant ability to absorb PA. Recently, PVP has attracted substantial attention in the field of fuel cells due to its reduced environmental impact and impressive fuel cell performance. However, because of the poor compatibility of PVP with PBI, little research has applied PVP to PA-doped PBI PEMs. This work introduces an innovative strategy of grafting PVP onto PBI to form a network-like polymer. In the absence of quaternary ammonium groups, PVP poses no crosslinking issues with PBI, and the nitrogen-containing functional groups on PVP provide PBI with robust phosphoric acid retention. Nuclear magnetic resonance (NMR) hydrogen spectrum analysis indicates the successful completion of the grafting reaction, with N-alkylation occurring on both sides of the grafting agent 1,4-bis(chloromethyl)benzene: on one side with the hydrogen atoms on the imidazole groups of PBI, and on the other with the terminal amino group of PVP. The XPS results provide additional elemental evidence: on the synthesized PBI-g-PVP surfaces, chlorine is absent (the chlorine of the grafting agent 1,4-bis(chloromethyl)benzene has been substituted) while sulfur is present (the sulfur of the amino-terminated PVP appears in PBI), demonstrating that PVP was successfully grafted onto PBI. The modified membranes were then fabricated into membrane electrode assemblies (MEAs). During fuel cell operation, all the grafted membranes showed substantial improvements in maximum current density and peak power density compared to the unmodified membrane. For PBI-g-PVP 30, with a grafting degree of 22.4%, the peak power density reaches 1312 mW cm⁻², a 59.6% enhancement over the pristine PBI membrane. The improvement is attributed to the improved PA-binding ability of the membrane after grafting. The accelerated stress test (AST) results show that the grafted membranes have better long-term durability and performance than unmodified membranes, attributed to the added PA-binding sites, which effectively prevent the PA leaching caused by proton migration. In conclusion, the test results indicate that grafting PVP onto PBI is a promising strategy that can effectively improve fuel cell performance.

Keywords: fuel cell, grafting modification, PA doping ability, PVP

Procedia PDF Downloads 79
670 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework

Authors: Ayat-Allah Bouramdane

Abstract:

Solar photovoltaic (PV) and concentrated solar power (CSP) systems burn no fossil fuels and release no greenhouse gases while generating electricity, so they could meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector depends on the temperature and the amount of solar radiation received by its surface, so determining the most suitable locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to the multi-criteria evaluation of site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is used to specify the criteria and sub-criteria; to identify the unsuitable and the low-, moderate-, high-, and very-high-suitability areas for each GRI layer; to perform the pairwise comparisons at each level of the hierarchy based on experts' knowledge; and to calculate the weights using AHP, producing the final suitability map of solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, with Dakhla classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's existing PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus. The adopted methodology and resulting suitability map could be used by researchers or engineers to inform decision-makers on the selection, design, and planning of future solar plant sites, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
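
For readers unfamiliar with the weight-derivation step of AHP, the sketch below computes criterion weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The 3×3 matrix and the criteria it compares are illustrative assumptions, not the expert-elicited judgments used in the study.

```python
import numpy as np

# AHP weight derivation from a pairwise comparison matrix (principal
# eigenvector method). The 3x3 matrix below is a made-up example comparing
# solar irradiation, distance to grid, and land slope; it is not the
# authors' expert-elicited matrix.
A = np.array([
    [1.0, 5.0, 7.0],   # irradiation vs. grid distance, slope
    [1/5, 1.0, 3.0],
    [1/7, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # index of principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized criterion weights

# Saaty's consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                                   # Saaty's random index for n = 3
CR = CI / RI
print(f"weights = {w.round(3)}, CR = {CR:.3f}")  # CR < 0.1 is acceptable
```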

Keywords: analytic hierarchy process, concentrated solar power, Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability

Procedia PDF Downloads 173
669 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacture of industrial gas turbine blades. With a carefully designed microstructure and suitable alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC solely from simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (size, fraction, and frequency of the gamma-prime (γ′) and carbide phases in the gamma (γ) matrix, and grain size) needs to be optimized through heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, soaking times, and cooling rates. In this work, microstructural evolution studies were performed experimentally under various heat treatment conditions, and these findings were used as input for subsequent simulation studies. The operation time, soaking temperature, and cooling rate of the experimental heat treatment procedures served as microstructural simulation inputs. The simulation results were compared with the size, fraction, and frequency of the γ′ and carbide phases and the grain size measured by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to reconcile the experimental and theoretical results, making it possible to estimate the final microstructure without carrying out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution, and grain boundary strengthening contributions of the microstructure. Creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions for achieving the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
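
To illustrate the form such a yield stress model can take, the sketch below superposes intrinsic/solid-solution, grain-boundary (Hall-Petch), and precipitate (Orowan-type) contributions. All coefficients and the example inputs are illustrative placeholders, not the fitted IN 738 LC parameters used in the study.

```python
import math

# Superposition of strengthening contributions to yield stress. All
# coefficients below are illustrative placeholders, not fitted IN 738 LC data.
def yield_stress(d_grain_um: float, f_gamma_prime: float, r_ppt_nm: float) -> float:
    sigma_0 = 200.0                          # intrinsic + solid-solution term (MPa)
    k_hp = 750.0                             # Hall-Petch coefficient (MPa * um^0.5)
    sigma_gb = k_hp / math.sqrt(d_grain_um)  # grain-boundary term
    # Orowan-type bypass term: strength scales roughly as sqrt(f)/r
    # (form only; the prefactor is arbitrary).
    k_ppt = 4000.0
    sigma_ppt = k_ppt * math.sqrt(f_gamma_prime) / r_ppt_nm
    return sigma_0 + sigma_gb + sigma_ppt

# e.g. 120 um grains, 45% gamma-prime fraction, 80 nm mean precipitate radius
print(f"{yield_stress(120.0, 0.45, 80.0):.0f} MPa")
```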

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 248
668 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points

Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong

Abstract:

This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which play an increasingly important role in trajectory design for deep space missions and can serve as an effective alternative for an Earth-Mars direct transfer mission in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By refueling cargo spacecraft at spaceports located there, an interplanetary round-trip exploration shuttle mission through such a facility can also become a reusable transportation system. In addition, a spacecraft (S/C) cruising along invariant manifolds can in some cases save a large amount of fuel. It is therefore worthwhile to look for efficient transfer strategies that use the invariant manifolds about libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits can be connected with reasonable fuel consumption and flight duration given an appropriate design. In the paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter designs escape and capture trajectories to and from Halo orbits using impulsive maneuvers at the periapsis of the manifolds about the libration points. Transfer strategies designed with the two methods are then presented, and a comparative performance analysis is carried out. The comparison is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight. The numerical results showed that the coplanar circular method has certain advantages in cost or duration. Finally, an optimized transfer strategy subject to engineering constraints is identified and shown to be an effective alternative for a given direct transfer mission. Although most Earth-Mars mission planners prefer a direct transfer strategy for its relatively short time of flight, the strategies given in the paper can still be regarded as effective alternatives, given the advantages mentioned above and a departure window longer than that of a direct transfer.
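
The methods above operate in the circular restricted three-body problem (CR3BP). As a minimal sketch of the underlying dynamics, the code below integrates the planar CR3BP equations of motion in the rotating, nondimensional frame; the Sun-Earth mass parameter is approximate, and the initial state is illustrative rather than a converged Halo/Lyapunov orbit or manifold trajectory from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.003e-6  # Sun-Earth mass parameter (approximate)

def crtbp_planar(t, s, mu=MU):
    """Planar CR3BP equations of motion in the rotating, nondimensional frame."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)        # distance to the Sun
    r2 = np.hypot(x - 1 + mu, y)    # distance to the Earth
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

# Start near the Sun-Earth L2 point (x_L2 ~ 1.01 in these units) with a small
# velocity offset; this state is illustrative, not a converged periodic orbit.
s0 = [1.012, 0.0, 0.0, 0.01]
sol = solve_ivp(crtbp_planar, (0.0, 3.0), s0, rtol=1e-10, atol=1e-12)
print(sol.y[:2, -1])  # final (x, y) in rotating-frame units
```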

Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points

Procedia PDF Downloads 244
667 E-Commerce Product Return Management Effects on Consumer Experience and Satisfaction: A Fast-Fashion Perspective

Authors: Nora Alomar, Bianca Alexandra Stefa, Saleh Bazi

Abstract:

This research uncovers the determinants that drive millennial consumers to return fast-fashion products purchased via e-commerce and the effects this has on consumer experience and satisfaction. Online consumption has skyrocketed, with e-commerce having been the most reliable and safe method of shopping during and after Covid-19, and customers have been noted to demand a wide variety of product characteristics and generous return policies. The authors selected millennial consumers because they are digital natives with an affinity for researching, reading product reviews, and shopping online, and because they have great spending power due to higher disposable income compared to other generations. A multi-study approach is adopted: study one (interviews, sample of 20 respondents) investigates the factors that drive product returns, and study two (PLS-SEM, sample of 250 respondents) examines the relationships between product return management and behavioral outcomes, with the factors generated in study one as moderators. Five themes emerge from study one (return policies, product characteristics, delivery lead time, seasonality, and product trial & overspending); the authors identify that two of the five factors (seasonality and product trial & overspending) have not been highlighted in the literature. The paper examines 11 hypotheses, of which 10 are supported. The findings highlight that the quality of product return management influences overall millennial customer experience and satisfaction, and that product return management has a significant negative effect on customer experience. Additionally, seasonality has a significant but negative moderating effect: increasing seasonality weakens the relationship between product return management and customer experience and satisfaction. The results also show that return policies have a significant negative influence on the relationship between returning a product and customer experience and satisfaction, as do product characteristics. The study further examines the influence of these factors on direct e-commerce websites versus third-party e-commerce websites, showing a statistically significant increase in the return rate of fast-fashion products on third-party websites. This paper aids practitioners in making strategic decisions related to return management, improving the quality of logistics services and, in turn, increasing profitability.
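
The moderation analyses in study two were run with PLS-SEM; as a simplified illustration of testing a moderator, the sketch below fits an ordinary least squares regression with an interaction term on simulated data (the variable names and effect sizes are assumptions, and OLS is a stand-in, not the authors' PLS-SEM model). A significant negative coefficient on the interaction mirrors the reported weakening effect of seasonality.

```python
import numpy as np
import statsmodels.api as sm

# Simplified moderation test with an interaction term (OLS, not PLS-SEM).
# Variables are simulated for illustration only.
rng = np.random.default_rng(0)
n = 250
return_mgmt = rng.normal(size=n)        # perceived return-management quality
seasonality = rng.normal(size=n)        # moderator
# Built-in negative interaction: higher seasonality weakens the positive effect.
satisfaction = (0.5 * return_mgmt - 0.3 * return_mgmt * seasonality
                + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([return_mgmt, seasonality,
                                     return_mgmt * seasonality]))
model = sm.OLS(satisfaction, X).fit()
print(model.params)  # a significant negative interaction term indicates moderation
```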

Keywords: customer experience, customer satisfaction, e-commerce, fast-fashion, product returns

Procedia PDF Downloads 109
666 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Many preschool assessment tools lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and fraught with observer bias. The World of Discovery™ curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, in which concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft, and counting corner, so children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can each grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of interdependent developmental areas (physical, cognitive, language, social, and emotional development), which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale measuring these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of them. During the cognitive corner, the child's perceptual reasoning, pre-math abilities, hand-eye coordination, and fine motor skills can be observed and evaluated. The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities objectively and in real time. A pilot study of the tool was conducted with a sample of 100 children aged 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. Norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development has likewise been observed among children in a study conducted by Sibley and Etnier. In the children, 'comprehension' ability was found to be greater than 'reasoning' and pre-math abilities, consistent with the preoperational stage of Piaget's theory of cognitive development. The average scores of the various parameters obtained through the tool corroborate psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities could be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies could be devised.
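
As a minimal sketch of how norms can be derived from the mean and standard deviation of observed scores, the snippet below bands individual ratings by z-score. The scores and the ±1 SD cut-offs are illustrative assumptions, not the study's actual norms.

```python
import statistics

# Deriving simple norms from observed scores (mean and SD), as described above.
# The scores and the +/-1 SD banding are illustrative assumptions.
scores = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.6, 4.2, 2.5, 3.8]  # 5-point ratings
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

def band(score: float) -> str:
    z = (score - mean) / sd
    if z >= 1.0:
        return "above expected range"   # e.g., candidate high performer
    if z <= -1.0:
        return "below expected range"   # may need adjusted activity difficulty
    return "within expected range"

for s in (2.6, 3.5, 4.6):
    print(s, band(s))
```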

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 362
665 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School

Authors: Osk Dagsdottir

Abstract:

Background: Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal subjects and patients requires knowledge of the gait parameters of each age group of normal children. However, gait databases for normal children of different ages are still lacking. Objectives: This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods: Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years. Three age groups were defined: young child (5-7), child (8-11), and adolescent (12-16). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at a comfortable speed along a 10-meter walkway, and each participant walked up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters (e.g., walking speed, cadence, stride length) and joint parameters (e.g., joint angles, forces, and moments). Moreover, each gait cycle was divided into 8 phases. The range of motion (ROM) of the pelvis, hip, knee, and ankle joints in three planes for both limbs was calculated using an in-house program. Results: The temporal-spatial variables were compared across the three age groups, and a significant difference (p < 0.05) was found between the groups. Step length and walking speed increased gradually from the young child group to the adolescent group, while cadence gradually decreased. The mean and standard deviation (SD) of the step length of the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively. The mean and SD of the cadence of the young child, child, and adolescent groups were 140.11 ± 15.79 steps/min, 129 ± 11.84 steps/min, and 115.96 ± 6.47 steps/min, respectively. Moreover, significant differences were observed in the kinematic parameters, both over the whole gait cycle and within individual phases. For example, the ROM of the knee angle in the sagittal plane over the whole cycle is larger in the young child group (65.03 ± 0.52 deg) than in the child group (63.47 ± 0.47 deg). Conclusion: Our results showed significant differences between the age groups across the gait phases; thus, children's walking performance changes with age. It is therefore important for clinicians to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
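
As a minimal sketch of the ROM calculation described above, the snippet below takes a joint-angle trace and computes max minus min within each gait cycle. The simulated knee-angle signal and cycle boundaries are assumptions standing in for the Vicon Plug-in-Gait output and the in-house program.

```python
import numpy as np

# Range of motion (ROM) of a joint angle per gait cycle: max minus min of the
# angle trace within each cycle. Angle data and cycle indices are simulated;
# in practice they would come from the motion capture pipeline.
rng = np.random.default_rng(1)
t = np.linspace(0, 2, 400)                      # two gait cycles, arbitrary units
knee_angle = 35 + 30 * np.sin(2 * np.pi * t)**2 + rng.normal(0, 0.5, t.size)
cycle_bounds = [(0, 200), (200, 400)]           # sample indices of each cycle

roms = [knee_angle[a:b].max() - knee_angle[a:b].min() for a, b in cycle_bounds]
print([f"{r:.1f} deg" for r in roms])           # one ROM value per gait cycle
```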

Keywords: action research, creative learning, mathematics education, professional development

Procedia PDF Downloads 108
664 Effects of Potential Chloride-Free Admixtures on Selected Mechanical Properties of Kenya Clay-Based Cement Mortars

Authors: Joseph Mwiti Marangu, Joseph Karanja Thiong'o, Jackson Muthengia Wachira

Abstract:

The mechanical performance of hydrated cement mortars depends mainly on compressive strength and setting time, properties that are crucial in the construction industry. Pozzolana-based cements are mostly characterized by low 28-day compressive strength and long setting times, which are among the major impediments to their production and diverse use despite the numerous technological and environmental benefits associated with them. The study investigated the effects of potential chemical activators on calcined clay-Portland cement blends, aiming to achieve high early compressive strength and shorter setting times in cement mortar. In addition, the standard consistency, soundness, and insoluble residue of all cement categories were determined. The test cement, labeled PCC for the purposes of this study, was made by blending calcined clays with Ordinary Portland Cement (OPC) at replacement levels of 35 to 50 percent by mass of OPC. Mortar prisms measuring 40 mm × 40 mm × 160 mm were prepared and cured in accordance with the KS EAS 148-3:2000 standard. Solutions of Na2SO4, NaOH, Na2SiO3, and Na2CO3 at concentrations of 0.5-2.5 M were separately added during casting. Compressive strength was determined on the 2nd, 7th, 28th, and 90th days of curing. For comparison purposes, commercial Portland Pozzolana Cement (PPC) and Ordinary Portland Cement (OPC) were also investigated without activators under similar conditions. X-Ray Fluorescence (XRF) was used for chemical analysis, while X-Ray Diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FTIR) were used for mineralogical analysis of the test samples. The results indicated that the addition of activators significantly increased the 2-day and 7-day compressive strength but produced only a minimal increase in the 28-day and 90-day compressive strength. A relatively linear relationship was observed between compressive strength and the concentration of the activator solutions up to the 28th day of curing. Addition of the activators significantly reduced both the initial and final setting times. Standard consistency and soundness varied with the amount of clay in the test cement and the concentration of the activators. The amount of insoluble residue increased with increased replacement of OPC by calcined clays. Mineralogical studies showed that N-A-S-H is formed in addition to C-S-H. In conclusion, a concentration of 2 M for all activator solutions produced the optimum compressive strength and greatly reduced the setting times for all cement mortars.

Keywords: activators, admixture, cement, clay, pozzolana

Procedia PDF Downloads 261
663 Immersive Environment as an Occupant-Centric Tool for Architecture Criticism and Architectural Education

Authors: Golnoush Rostami, Farzam Kharvari

Abstract:

In recent years, developments in architectural education have shifted teaching from conventional methods to alternative, state-of-the-art approaches and strategies. Criticism in architecture has been a key player in both the profession and education, but it has mostly been offered by renowned individuals. Hence, not only students and other professionals but also critics themselves may not have the option to experience buildings and must rely on available 2D materials, such as images and plans, which may not yield a holistic understanding and evaluation of buildings. Immersive environments, on the other hand, give students and professionals the opportunity to experience buildings virtually and to ground their evaluation in experience rather than in judgments based on 2D materials. Therefore, the aim of this study is to compare the effect of experiencing buildings in immersive environments versus through 2D drawings, including images and plans, on architecture criticism and architectural education. Three buildings with parametric brick facades were studied through 2D materials and in Unreal Engine v. 24 as an immersive environment by 22 architecture students, who were selected using convenience sampling and divided into two equal groups using simple random sampling. This study used mixed methods; the quantitative section was carried out with a questionnaire, and in-depth interviews were used for the qualitative section. The questionnaire was developed to measure three constructs: privacy regulation based on Altman's theory, the sufficiency of illuminance levels in the building, and the visual status of the view (the visual appeal of views given obstructions that may be caused by the facades). Furthermore, participants had the opportunity to reflect their understanding and evaluation of the buildings in individual interviews. The collected questionnaire data were analyzed using independent t-tests and descriptive analyses in IBM SPSS Statistics v. 26, and the interviews were analyzed using the content analysis method. The interviews showed that participants who experienced the buildings in the immersive environment arrived at a more thorough and precise evaluation than those who studied them through 2D materials. Moreover, analysis of the questionnaires revealed statistically significant differences in the measured constructs between the two groups. The outcome of this study suggests that integrating immersive environments into the profession and architectural education as an effective and efficient tool for architecture criticism is vital, since these environments allow users to evaluate buildings holistically for rigorous and sound criticism.
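
As a minimal sketch of the between-group comparison reported above, the snippet below runs an independent t-test on one construct with SciPy. The scores are simulated and the group means are assumptions; the study's actual analysis was run in IBM SPSS Statistics v. 26.

```python
import numpy as np
from scipy import stats

# Independent t-test comparing one questionnaire construct (e.g., privacy
# regulation) between the immersive-environment and 2D-materials groups.
# Scores are simulated; each group had 11 students in the study.
rng = np.random.default_rng(7)
immersive = rng.normal(loc=4.1, scale=0.6, size=11)
two_d = rng.normal(loc=3.4, scale=0.6, size=11)

t_stat, p_value = stats.ttest_ind(immersive, two_d)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant difference
```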

Keywords: immersive environments, architecture criticism, architectural education, occupant-centric evaluation, pre-occupancy evaluation

Procedia PDF Downloads 134
662 Comparison of Neutrophil Response to Curvularia, Bipolaris and Aspergillus Species

Authors: Eszter J. Tóth, Alexandra Hoffmann, Csaba Vágvölgyi, Tamás Papp

Abstract:

Members of the genera Curvularia and Bipolaris are closely related melanin-producing filamentous fungi; both have teleomorph states in the genus Cochliobolus. While Bipolaris species infect only plants and may cause serious agricultural damage, some Curvularia species have been recovered from opportunistic human infections. The human pathogenic species typically cause phaeohyphomycoses, i.e., mould infections caused by melanised fungi, which can manifest as invasive mycoses with frequent involvement of the central nervous system in immunocompromised patients or as local infections (e.g., keratitis, sinusitis, and cutaneous lesions) in immunocompetent people. Although their plant-fungal interactions have been intensively studied, only limited information is available about the human-pathogenic features of these fungi. The aim of this study was to investigate the response of neutrophil granulocytes to the hyphal forms of Curvularia and Bipolaris in comparison with the response to Aspergillus. Curvularia lunata SZMC 23759 and Aspergillus fumigatus SZMC 23245, both isolated from human eye infections, and Bipolaris zeicola BRIP 19582b, isolated from a plant leaf, were examined. Neutrophils were isolated from the heparinised venous blood of healthy donors by dextran sedimentation followed by centrifugation over Ficoll and hypotonic lysis of erythrocytes. The viability and purity of the cells were checked with trypan blue and Wright staining, respectively. Neutrophils were infected with germinated conidia at a ratio of 5:1. Production of hydrogen peroxide, superoxide anion, and nitrogen monoxide was measured both intracellularly and extracellularly in response to the germinated spores, with or without the supernatant and after serum treatment, and the ROS and NOS production of neutrophils interacting with the three fungi was compared. It is already known that Aspergillus species induce ROS production in neutrophils only after serum treatment. In the case of C. lunata, serum opsonisation also induced intensive production of reactive species, although a lower level of production was measured in the absence of serum as well. After interaction with the plant-pathogenic B. zeicola, the amounts of reactive species were similar with and without serum treatment. The presence of the germination supernatant decreased reactive species production for each fungus. Interaction with Curvularia, Bipolaris, and Aspergillus species thus induced different neutrophil responses. It seems that recognition of C. lunata and B. zeicola is independent of serum opsonisation, although opsonisation increases the level of reactive species produced in response to C. lunata. The study was supported by the grant LP2016-8/2016.

Keywords: Curvularia, neutrophils, NOS, ROS, serum opsonisation

Procedia PDF Downloads 197
661 Hydration of Three-Piece K Peptide Fragments Studied by Means of Fourier Transform Infrared Spectroscopy

Authors: Marcin Stasiulewicz, Sebastian Filipkowski, Aneta Panuszko

Abstract:

Background: The hallmark of neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, is the aggregation of abnormal forms of peptides and proteins. Water is essential to the functioning of biomolecules and is one of the key factors influencing protein folding and misfolding. However, hydration studies of proteins are complicated by the complexity of protein systems; the use of model compounds can facilitate the interpretation of results involving larger systems. Objectives: The goal of the research was to characterize the properties of the hydration water surrounding two three-residue K peptide fragments, INS (Isoleucine - Asparagine - Serine) and NSR (Asparagine - Serine - Arginine). Methods: Fourier-transform infrared spectra of aqueous solutions of the tripeptides were recorded on a Nicolet 8700 spectrometer (Thermo Electron Co.). Measurements were carried out at 25°C at varying solute molalities. To remove vibrational couplings from the water spectra and, consequently, obtain narrow O-D bands of semi-heavy water (HDO), the isotopic dilution method of HDO in H₂O was used. The difference spectra method allowed us to isolate the tripeptide-affected HDO spectrum. Results: The structural and energetic properties of water affected by the tripeptides were compared with those of pure water. The shift of the band gravity centers (related to the mean energy of water hydrogen bonds) towards lower values relative to pure water suggests that the energy of hydrogen bonds between water molecules surrounding the tripeptides is higher than in pure water. A comparison of the mean oxygen-oxygen distances in tripeptide-affected water and pure water indicates that water-water hydrogen bonds are shorter in the presence of these tripeptides. The analysis of differences in the oxygen-oxygen distance distributions between tripeptide-affected water and pure water indicates that around the tripeptides, the contribution of water molecules with mean-energy hydrogen bonds decreases while the contribution of strong hydrogen bonds increases. Conclusions: It was found that hydrogen bonds between water molecules in the hydration sphere of the tripeptides are shorter and stronger than in pure water; in the presence of the tested tripeptides, the structure of water is strengthened compared to pure water. Moreover, it has been shown that in the vicinity of NSR (Asparagine - Serine - Arginine), water forms stronger and shorter hydrogen bonds. Acknowledgments: This work was funded by the National Science Centre, Poland (grant 2017/26/D/NZ1/00497).
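
As a minimal sketch of the band gravity center computation mentioned above (the intensity-weighted mean wavenumber of a band), the snippet below evaluates nu_g = integral(nu * A) / integral(A) for a synthetic Gaussian O-D stretch band; the band shape and position are assumptions standing in for the measured tripeptide-affected HDO spectrum.

```python
import numpy as np

# Gravity center of an absorption band: the intensity-weighted mean wavenumber,
# nu_g = integral(nu * A(nu)) / integral(A(nu)). The Gaussian band below is
# synthetic; real input would be the tripeptide-affected HDO difference spectrum.
nu = np.linspace(2200.0, 2800.0, 601)            # wavenumber axis (cm^-1)
A = np.exp(-0.5 * ((nu - 2505.0) / 80.0) ** 2)   # synthetic O-D stretch band

nu_g = np.trapz(nu * A, nu) / np.trapz(A, nu)
print(f"gravity center = {nu_g:.1f} cm^-1")      # shifts to lower values for
                                                 # stronger water hydrogen bonds
```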

Keywords: amyloids, K-peptide, hydration, FTIR spectroscopy

Procedia PDF Downloads 178
660 Adjustment of the Whole-Body Center of Mass during Trunk-Flexed Walking across Uneven Ground

Authors: Soran Aminiaghdam, Christian Rode, Reinhard Blickhan, Astrid Zech

Abstract:

Despite considerable research on the impact of imposed trunk posture on human walking, less is known about such locomotion while negotiating changes in ground level. The aim of this study was to investigate the behavior of the vertical position of the body center of mass (VBCOM) in response to a two-fold expected perturbation, namely alterations in body posture and in ground level. To this end, the kinematic data and ground reaction forces of twelve able-bodied participants were collected. We analyzed the VBCOM above the ground, determined by the body segmental analysis method relative to the laboratory coordinate system, at the touchdown and toe-off instants during walking across uneven ground, characterized by a perturbation contact (a 10-cm visible drop) and pre- and post-perturbation contacts, in comparison to unperturbed level contact, while participants maintained three postures (regular erect, ~30°, and ~50° of trunk flexion from the vertical). The VBCOM was normalized to the distance between the greater trochanter marker and the lateral malleolus marker at the instant of touchdown. Moreover, we calculated the backward rotation during step-down as the difference between the maximum trunk angle in the pre-perturbation contact and the minimum trunk angle in the perturbation contact. Two-way repeated measures ANOVAs revealed contact-specific effects of posture on the VBCOM at touchdown (F = 5.96, p = 0.00). As indicated by the analysis of simple main effects, no between-posture differences in the VBCOM at touchdown were found during the unperturbed level and pre-perturbation contacts. In the perturbation contact, trunk-flexed gaits showed a significant increase in the VBCOM compared to the pre-perturbation contact. In the post-perturbation contact, the VBCOM decreased significantly in all gait postures relative to the preceding corresponding contacts, with no between-posture differences. Main effects of posture revealed that the VBCOM at toe-off decreased significantly in trunk-flexed gaits relative to the regular erect gait. For the main effect of contact, the VBCOM at toe-off changed across the perturbation and post-perturbation contacts compared to the unperturbed level contact. Furthermore, participants exhibited a backward trunk rotation during step-down, possibly to control the angular momentum of the whole body. A more pronounced backward trunk rotation (2- to 3-fold compared with level contacts) in trunk-flexed walking contributed to the observed elevated VBCOM during step-down, which may have facilitated drop negotiation. These results may shed light on the interaction between posture and locomotion in able-bodied gait, and specifically on the behavior of the body center of mass during perturbed locomotion.
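
As a minimal sketch of the body segmental analysis method for the center of mass, the snippet below computes the vertical COM as a mass-weighted average of segment COM heights. The segment mass fractions and positions are illustrative assumptions, not the anthropometric model used in the study.

```python
# Segmental method for the whole-body center of mass: a mass-weighted average
# of segment COM positions. Segment mass fractions and heights are
# illustrative, not the anthropometric model used in the study.
segment_mass_fraction = {"trunk": 0.497, "thigh": 0.100, "shank": 0.0465,
                         "foot": 0.0145}
segment_com_z = {"trunk": 1.10, "thigh": 0.65, "shank": 0.30, "foot": 0.05}  # m

def vertical_com(fractions: dict, com_z: dict) -> float:
    """Mass-weighted mean of segment COM heights (normalized by total mass)."""
    total = sum(fractions.values())
    return sum(fractions[s] * com_z[s] for s in fractions) / total

print(f"VBCOM = {vertical_com(segment_mass_fraction, segment_com_z):.3f} m")
```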

Keywords: center of mass, perturbation, posture, uneven ground, walking

Procedia PDF Downloads 181