Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM
Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi
Abstract:
FH2 (formin homology-2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers. Chain dimerization through ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulations of chain A of the FH2 domain of DAAM, solvated in a water box with 50 mM NaCl, were conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and AmberTools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained from the I-TASSER web server. The calcium- and ATP-bound G-actin structure 3hbt was used as a reference protein with well-described denaturation dynamics. Topology and parameter information of the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins in AmberTools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated and run in production in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed similarly insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K.
In contrast, total molecular average RMSD values of G-actin increased considerably at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from the native, ordered state to the denatured, disordered state, a transition well described in the literature. RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its functions in FH2 domain dimerization. In conclusion, higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values, between native and denatured states of the FH2 domain of DAAM over this temperature range makes it possible to attribute these formins to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics
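The analysis above reduces each trajectory frame to a single RMSD value. As a minimal sketch (not the authors' analysis scripts), the per-frame RMSD between two already-superimposed conformations can be computed as follows; in practice a Kabsch alignment would normally be applied first, e.g. via VMD or MDAnalysis.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations given as
    N x 3 coordinate arrays, assuming they are already superimposed
    (no rotational/translational alignment is performed here)."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff ** 2).sum() / len(diff)))
```

Averaging this quantity per region (lasso, post, tail) rather than over the whole chain is what distinguishes the regional RMSD values discussed above from the total molecular average.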
Assessing Professionalism, Communication, and Collaboration among Emergency Physicians by Implementing a 360-Degree Evaluation
Authors: Ahmed Al Ansari, Khalid Al Khalifa
Abstract:
Objective: Multisource feedback (MSF), also called 360-degree evaluation, is an evaluation process in which questionnaires are distributed among medical peers and colleagues to assess physician performance from sources other than the attending or supervising physicians. The aim of this study was to design, implement, and evaluate a 360-degree process for assessing emergency physician trainees in the Kingdom of Bahrain. Method: The study was undertaken at Bahrain Defence Force Hospital, a military teaching hospital in the Kingdom of Bahrain. Thirty emergency physicians (the total population of emergency physicians in our hospital) were assessed in this study. We developed an instrument modified from the Physician Achievement Review (PAR) instrument used to assess physicians in Alberta, focusing on professionalism, communication skills, and collaboration only. To achieve face and content validity, a table of specifications was constructed and a working group was involved in constructing the instrument; expert opinion was considered as well. The instrument consisted of 39 items: 15 items assessing professionalism, 13 items assessing communication skills, and 11 items assessing collaboration. Each emergency physician was evaluated by three groups of raters: four emergency physician colleagues, four medical colleagues considered referral physicians from different departments, and four coworkers from the emergency department. An independent administrative team was formed to distribute the instruments and collect them in closed envelopes. Each envelope contained the instrument and a guide to the implementation of the MSF and the purpose of the study. Results: A total of 30 emergency physicians, 16 males and 14 females, representing the total number of emergency physicians in our hospital, were assessed.
A total of 269 forms were collected: 105 surveys from coworkers in the emergency department, 93 from emergency physician colleagues, and 116 from referral physicians in other departments. The overall mean response rate was 71.2%. The whole instrument was found suitable for factor analysis (KMO = 0.967; Bartlett's test significant). Factor analysis showed that the questionnaire data decomposed into three factors, which accounted for 72.6% of the total variance: professionalism, collaboration, and communication. Reliability analysis indicated that the full-scale instrument had high internal consistency (Cronbach's α = 0.98). The generalizability coefficient (Ep²) was 0.71 for the surveys. Conclusions: Based on the present results, the current instrument and procedures have high reliability, validity, and feasibility for assessing emergency physician trainees in the emergency room.
Keywords: MSF system, emergency, validity, generalizability
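For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's α can be computed from a respondents-by-items score matrix as sketched below. This is a generic illustration, not the study's actual analysis code, which presumably used a statistics package.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Values close to 1, like the 0.98 reported here, indicate that the items measure a common underlying construct very consistently.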
Cultural Statistics in Governance: A Comparative Analysis between the UK and Finland
Authors: Sandra Toledo
Abstract:
There is an increasing tendency in governments toward more evidence-based policy-making and stricter auditing of public spheres. Especially when budgets are tight and taxpayers demand greater scrutiny of the use of available resources, statistics and numbers appear as an effective tool to produce data supporting investments made, as well as to evaluate public policy performance. This pressure has not exempted the cultural and art fields. Finland, like the rest of the Nordic countries, has kept its welfare-state principles, whilst the UK seems to be going in the opposite direction, relying more and more on the private sector and foundations as the state folds back. The boom of the creative industries, along with a managerial trend introduced by Thatcher in the UK, brought as a result a commodification of the arts within a market logic, where sponsorship and commercial viability were the keynotes. Finland, for its part, in spite of following a more protectionist approach to the arts, seems to be heading in a similar direction. Additionally, there is growing international interest in the application of cultural participation studies and in the comparability of their results between countries. Nonetheless, the standardization of cultural surveys has not yet happened: there are differences not only in how these surveys are applied, in terms of timing and frequency, but also in who conducts them. Therefore, one hypothesis considered in this research is that behind the differences between countries in the application of cultural surveys and in the production and utilization of cultural statistics lies the cultural policy model adopted by the government. In other words, the main goal of this research is to answer the following: What are the differences and similarities between Finland and the UK regarding the role cultural surveys have in cultural policy making?
Secondary questions include: How does the cultural policy model followed by each country influence the role of cultural surveys in cultural policy making? And what are the differences at the local level? To answer these questions, strategic cultural policy documents and interviews with key informants will be used and analyzed as source data, using content analysis methods. Cultural statistics per se will not be compared, but rather their use as instruments of governing and their relation to the cultural policy model. Aspects such as the execution of cultural surveys, funding, periodicity, and the use of statistics in formal reports and publications will be studied in the written documents, while the interviews will target other elements such as the perceptions of those involved in collecting cultural statistics or in policy making, the distribution of tasks and hierarchies among cultural and statistical institutions, and a general overview. A limitation identified beforehand, and expected throughout the process, is the language barrier in the case of Finland when it comes to official documents; this will be tackled by interviewing the authors of such papers and choosing key extracts of them for translation.
Keywords: Finland, cultural statistics, cultural surveys, United Kingdom
Governance in the Age of Artificial Intelligence and E-Government
Authors: Mernoosh Abouzari, Shahrokh Sahraei
Abstract:
Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities to participate in democratic processes and institutions. It makes it possible to deliver government services to the customer around the clock, which increases people's satisfaction and participation in political and economic activities. The expansion of e-government services and their movement towards intelligentization can re-establish the relationship between the government and citizens and among the elements and components of the government. Electronic government is the result of the use of information and communication technology (ICT); implementing it at the government level brings tremendous changes in the efficiency and effectiveness of government systems and the way services are provided, which in turn raises public satisfaction on a wide scale. The main level of electronic government services has today become concrete with the presence of artificial intelligence systems; recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and the classification of data. With deep learning tools, artificial intelligence can bring a significant improvement in the delivery of services to citizens and uplift the work of public service professionals, while also inspiring a new generation of technocrats to enter government.
This smart revolution may put aside some functions of the government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in using artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize everything, the world order will come to depend on rich multinational companies, and algorithmic systems will in effect become the ruling systems of the world. It can be said that, at present, the revolution in information technology and biotechnology has been started by engineers, large economic companies, and scientists who are rarely aware of the political complexities of their decisions and certainly do not represent anyone. Therefore, it seems that if liberalism, nationalism, or any other ideology wants to organize the world of 2050, it should not only rationalize the concepts of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes caused by artificial intelligence in the political and economic order will therefore lead to a major change in the way all countries deal with the phenomenon of digital globalization. In this paper, while debating the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.
Keywords: electronic government, artificial intelligence, information and communication technology, system
Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes
Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen
Abstract:
Background: Bisphenol A (BPA) is commonly used in consumer products such as the inner coatings of cans and polycarbonate bottles. BPA is considered an endocrine-disrupting substance that affects normal human hormones and may cause adverse effects on human health. Pregnant women and fetuses are groups susceptible to endocrine-disrupting substances. Prenatal exposure to BPA has been shown to affect the fetus through the placenta. Therefore, it is important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women, and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 from the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect the socio-demographic characteristics, eating habits, and medical conditions of the participants. Information on the birth outcomes of the fetus was obtained from medical records. For chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction and ultra-performance liquid chromatography coupled with a Q-TOF mass spectrometer, and the urinary concentrations were adjusted for creatinine. The association between maternal concentrations of BPA and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA was 99%, with concentrations ranging from 0.16 to 46.90 μg/g creatinine. The mean (SD) BPA level was 5.37 (6.42) μg/g creatinine. The mean ± SD of body weight, body length, head circumference, chest circumference, and gestational age at birth were 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm, and 38.58 ± 1.37 weeks, respectively.
After stratifying the exposure levels into two groups by the median, pregnant women in the higher exposure group tended to have newborns with lower body weight (OR = 0.57, 95% CI = 0.271-1.193), smaller chest circumference (OR = 0.70, 95% CI = 0.335-1.47), and shorter gestational age at birth (OR = 0.46, 95% CI = 0.191-1.114). However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tend to give birth to newborns with lower body weight, smaller chest circumference, or shorter gestational age. More data will be included to verify the results, and this report will also present the predictors of BPA concentrations for pregnant women.
Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure
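The odds ratios above come from dichotomized outcomes. As an illustration only (the cell counts below are hypothetical, not the study's data), an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 exposure-by-outcome table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without.
    Returns the odds ratio and its Wald 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)
```

A confidence interval that spans 1, as all three intervals above do, is what "did not reach statistical significance" means here.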
Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education
Authors: Maria Antonietta Marongiu
Abstract:
Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context; they also help the audience connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teaching courses taught in English to non-native learners in higher education. The global spread of COVID-19 has forced universities to transition their in-class courses to online delivery, which has inevitably placed a heavier interactional responsibility on the instructor than in-class courses do. Accordingly, online delivery needs greater structuring as regards establishing the reader/listener's resources for understanding and negotiating the text. Indeed, in online as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland's Interactional Model of Metadiscourse (2005), this study investigates Teacher Talk in online academic courses during the COVID-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and of subject-teacher online courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject).
Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, and in online teaching methodology courses and MOOCs (Massive Open Online Courses) has shown that instructors use a vast array of metadiscoursal features intended to express the speaker's intentions and standing with respect to the discourse; they also tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate, and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. The use of metadiscourse can thus contribute to the informative and persuasive impact of discourse and to the effectiveness of online communication, especially in learning contexts.
Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric
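A first quantitative pass over such transcripts often simply counts candidate markers. The sketch below is a toy illustration with tiny, hypothetical marker lists; Hyland's (2005) taxonomy is far larger, and real classification requires reading each occurrence in context, since many markers are ambiguous.

```python
import re
from collections import Counter

# Tiny illustrative marker lists -- NOT Hyland's full taxonomy.
INTERACTIVE = {"first", "then", "finally", "in other words", "for example"}
INTERACTIONAL = {"we", "you", "perhaps", "clearly", "note that"}

def count_markers(transcript, markers):
    """Count whole-word (or whole-phrase) occurrences of each marker,
    case-insensitively, in a transcript string."""
    text = transcript.lower()
    return Counter({m: len(re.findall(r"\b" + re.escape(m) + r"\b", text))
                    for m in markers})
```

Comparing the relative frequencies of the two categories across EFL, ESP, and subject-teaching transcripts is the kind of first-pass evidence such a study builds on.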
Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals
Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova
Abstract:
Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. The municipal intakes involve 34 wells arranged along 15 km in a north-south sequence at the foot of the left slope of the Protva river valley; the northern and southern water intakes lie upstream and downstream of the town, respectively. They belong to river-valley intakes with mixed feeding: precipitation infiltration is responsible for the smaller part of the groundwater, and the greater part is formed by overflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on precipitation and watershed area. Groundwater contamination with tritium was first detected in the sanitary protection zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers implementing the "Program of radiological monitoring in the territory of nuclear industry enterprises". A comprehensive survey of the SRC-IPPE's industrial site and adjacent territories revealed that research nuclear reactors and accelerators using tritium targets, as well as radioactive waste storages, could be considered potential sources of technogenic tritium. All these sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE is about 17.4-3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). A risk assessment was carried out to estimate the possible effect of these tritium concentrations on human health.
Data on tritium concentrations in pipeline drinking water were used for the calculations. The ³H activity amounted to 10.6 Bq/l, corresponding to a risk from consumption of such water of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for the population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the "risk optimization" range, i.e. the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analyses were made of the waters from all springs and wells near the SRC-IPPE. Chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking-water consumption amounts to 5·10⁻⁵; according to the accepted classification, the health risk from spring water consumption is inadmissible. The comparison of the risk associated with tritium exposure, on the one hand, and with dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other, has confirmed that it is the chemical pollutants that are responsible for the health risk.
Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk
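The tritium risk figure above follows the usual intake → dose → risk chain. The sketch below illustrates that chain with assumed coefficients (an ICRP-style ingestion dose coefficient for tritiated water, a nominal cancer-risk factor, and ~2 l/day water intake); these values are assumptions for illustration only, and the result is an order-of-magnitude estimate that does not necessarily reproduce the paper's ~3·10⁻⁷ yr⁻¹ figure, which may rest on different coefficients.

```python
# Illustrative coefficients -- assumptions, not the paper's values.
DOSE_COEF_SV_PER_BQ = 1.8e-11   # ingestion dose coefficient, tritiated water (Sv/Bq)
RISK_PER_SV = 5.5e-2            # nominal cancer-risk coefficient (per Sv)
WATER_L_PER_YEAR = 730.0        # ~2 l/day drinking-water intake

def annual_risk(activity_bq_per_l):
    """Annual individual risk from drinking water with the given 3H activity:
    intake (Bq/yr) -> committed dose (Sv/yr) -> risk (1/yr)."""
    intake_bq = activity_bq_per_l * WATER_L_PER_YEAR
    dose_sv = intake_bq * DOSE_COEF_SV_PER_BQ
    return dose_sv * RISK_PER_SV
```

With the 10.6 Bq/l reported above, this chain lands several orders of magnitude below the 10⁻⁶ tolerable-risk level, consistent with the paper's qualitative conclusion.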
AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image in common ways, which loses information important to the geolocalization task and degrades performance: excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above resizer as a front layer of the descriptor extraction network. It not only makes the network compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the geolocalization task.
Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
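The first step of the resizer, bilinear interpolation, can be sketched in plain NumPy as below. This is a stand-alone illustration of the interpolation itself, not the paper's implementation, which also resizes feature maps and applies the SKNet/SENet enhancement on top.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation of a 2-D array to shape (out_h, out_w)."""
    in_h, in_w = img.shape
    # Sample positions in the source image for each output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]        # vertical interpolation weights
    wx = (xs - x0)[None, :]        # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

The smoothing inherent in this step is exactly why the paper adds attention-based geometric enhancement afterwards: interpolation alone blurs the contours that geolocalization relies on.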
Evaluation of Existing Wheat Genotypes of Bangladesh in Response to Salinity
Authors: Jahangir Alam, Ayman El Sabagh, Kamrul Hasan, Shafiqul Islam Sikdar, Celaleddin Barutçular, Sohidul Islam
Abstract:
The experiment (germination test and seedling growth) was carried out at the laboratory of the Agronomy Department, Hajee Mohammad Danesh Science and Technology University (HSTU), Dinajpur, Bangladesh, during January 2014. Germination and seedling growth of 22 existing wheat genotypes in Bangladesh, viz. Kheri, Kalyansona, Sonora, Sonalika, Pavon, Kanchan, Akbar, Barkat, Aghrani, Prativa, Sourab, Gourab, Shatabdi, Sufi, Bijoy, Prodip, BARI Gom 25, BARI Gom 26, BARI Gom 27, BARI Gom 28, Durum, and Triticale, were tested at three salinity levels (0, 100 and 200 mM NaCl) for 10 days in sand culture in small plastic pots. The speed of germination, as expressed by germination percentage (GP), germination rate (GR), germination coefficient (GC), and germination vigor index (GVI), was delayed in all wheat genotypes, and germination percentage was reduced by salinization compared to the control. Smaller reductions of GP, GR, GC, and GVI under salinity were observed in BARI Gom 25, BARI Gom 27, Shatabdi, Sonora, and Akbar, and larger reductions in BARI Gom 26, Durum, Triticale, Sufi, and Kheri. Shoot and root lengths and fresh and dry weights were affected by salinization, with the shoot more affected than the root. Under saline conditions, longer shoot and root lengths (i.e. smaller reductions) were recorded in BARI Gom 25, BARI Gom 27, Akbar, and Shatabdi, while BARI Gom 26, Durum, Prodip, and Triticale produced shorter shoots and roots. In this study, the genotypes BARI Gom 25, BARI Gom 27, Shatabdi, Sonora, and Aghrani performed better in terms of shoot and root growth (fresh and dry weights) and proved to be salinity-tolerant, whereas Durum, BARI Gom 26, Triticale, Kheri, and Prodip were seriously affected in terms of fresh and dry weights by the saline environment.
BARI Gom 25, BARI Gom 27, Shatabdi, Sonora, and Aghrani showed higher salt tolerance index (STI) values based on shoot dry weight, while BARI Gom 26, Triticale, Durum, Sufi, Prodip, and Kalyansona showed lower STI values under saline conditions. Based on the most salt-tolerant and most susceptible traits, the genotypes under 100 and 200 mM NaCl stress can be ranked as salt-tolerant: BARI Gom 25 > BARI Gom 27 > Shatabdi > Sonora, and salt-susceptible: BARI Gom 26 > Durum > Triticale > Prodip > Sufi > Kheri. From this experiment, it can be concluded that BARI Gom 25 may be treated as the most salt-tolerant and BARI Gom 26 as the most salt-sensitive genotype in Bangladesh.
Keywords: genotypes, germination, salinity, wheat
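The indices used above are simple ratios. As a hedged sketch (the abstract does not state its exact STI formula; the stress/control ratio shown here is one common formulation, and the example weights are hypothetical):

```python
def salt_tolerance_index(dw_stress, dw_control):
    """STI as the stress/control ratio of shoot dry weight, as a percentage
    (one common formulation; other definitions exist in the literature)."""
    return 100.0 * dw_stress / dw_control

def germination_percentage(germinated, sown):
    """GP: share of sown seeds that germinated, as a percentage."""
    return 100.0 * germinated / sown
```

Ranking genotypes by such ratios at each NaCl level is what produces ordered lists like "BARI Gom 25 > BARI Gom 27 > Shatabdi > Sonora".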
Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and causes continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses to the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, and for the quantification of changes and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the nonparametric Sen's slope test.
A Google Maps interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the south and 6-7 °C in the north, summers show the weakest warming over the same period, ranging from about 0.5-1.5 °C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold over the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
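Two of the indices described above, frost days and growing degree days, can be sketched from daily temperature series as follows. This is an illustration only: the study computed its indices on the interpolated 10 km Canadian Climate Grid, and the base temperature used here is an assumption, since growing-degree-day definitions vary by crop and agency.

```python
import numpy as np

def frost_days(tmin_daily):
    """Annual count of days with minimum temperature below 0 degrees C."""
    return int((np.asarray(tmin_daily) < 0.0).sum())

def growing_degree_days(tmin_daily, tmax_daily, base=5.0):
    """GDD: sum over the year of the positive part of (daily mean - base).
    The 5 degrees C base is an illustrative choice."""
    tmean = (np.asarray(tmin_daily) + np.asarray(tmax_daily)) / 2.0
    return float(np.clip(tmean - base, 0.0, None).sum())
```

Applying such functions to every grid cell and year, then running Mann-Kendall and Sen's slope on the resulting annual series, yields the trend maps the study describes.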
Procedia PDF Downloads 93
774 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points
Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong
Abstract:
This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which have been playing an increasingly important role in trajectory design for deep space missions and can be used as an effective alternative to an Earth-Mars direct transfer in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By adding fuel to cargo spaceships located at spaceports, the interplanetary round-trip exploration shuttle mission of such a system facility can also become a reusable transportation system. In addition, in some cases, when the spacecraft cruises along invariant manifolds, it can save a large amount of fuel. Therefore, it is necessary to search for efficient transfer strategies using invariant manifolds about libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits could be connected with reasonable fuel consumption and flight duration under an appropriate design. In the paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter treats escape and capture trajectories to and from Halo orbits by applying impulsive maneuvers at the periapsis of the manifolds about libration points. Designs of transfer strategies for the two methods are then shown, and a comparative performance analysis of the two methods is carried out accordingly. The comparison of strategies is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight, as mentioned above. The numerical results showed that the coplanar circular method has certain advantages in cost or duration.
Finally, an optimized transfer strategy with engineering constraints is identified and shown to be an effective alternative solution for a given direct transfer mission. This paper investigated the main methods and presented an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer to build a direct transfer strategy due to its relatively short time of flight, the strategies given in the paper can still be regarded as effective alternative solutions, given the advantages mentioned above and a longer departure window than direct transfer.
Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points
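The two comparison criteria named above (total fuel consumption, i.e., ΔV, and time of flight) define a simple dominance test between candidate strategies. The sketch below filters a set of strategies to its Pareto front; the ΔV and time-of-flight numbers are placeholders for illustration, not mission results.

```python
def dominates(a, b):
    """a dominates b if it is no worse on both criteria and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(strategies):
    """Keep strategies not dominated by any other (lower delta-v and TOF are better)."""
    return {name: c for name, c in strategies.items()
            if not any(dominates(other, c) for other in strategies.values())}

# Placeholder (delta-v in km/s, time of flight in days) pairs, illustration only
strategies = {
    "direct transfer":   (5.6, 260),
    "halo hopping":      (4.9, 640),
    "coplanar circular": (4.6, 580),
}
front = pareto_front(strategies)
```

With these invented numbers, the direct transfer survives on time of flight and the coplanar circular method on fuel cost, mirroring the trade-off discussed in the abstract.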
Procedia PDF Downloads 245
773 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or the survivability) of the AGCV against an enemy's attack. In the vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which can damage internal components or injure crews. The penetration equations are derived from penetration experiments, which require a long time and great effort. Moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option to calculate penetration depth. However, it is very important to model the targets and select the input parameters properly in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the effect of ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the calculated results from the simulation and the experimental data from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to get optimized overall performance. As a result of the analysis, the following was found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase.
2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with the fixed side surface of the target gives more penetration depth than that with fixed side and rear surfaces. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiments. By using the simulation tool ANSYS with delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data of penetration experiments are usually hard to get because of security reasons, and published papers provide them only for limited target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the design process of an AGCV.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
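The accuracy criterion used in the sensitivity analysis, the RMS error between calculated and experimental penetration depths, can be sketched as follows; the depth values below are invented to show the mechanics, not taken from the cited experiments.

```python
import math

def rms_error(simulated, experimental):
    """Root-mean-square error (mm) between simulated and experimental depths."""
    assert len(simulated) == len(experimental)
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experimental))
                     / len(simulated))

# Invented penetration depths (mm) for three shots at two candidate mesh sizes
experimental = [31.0, 44.5, 58.2]
depths_by_mesh = {
    "0.9 mm": [28.4, 41.0, 54.1],   # coarse mesh, faster but less accurate
    "0.5 mm": [30.2, 43.8, 57.0],   # fine mesh, slower but closer to experiment
}
errors = {mesh: rms_error(d, experimental) for mesh, d in depths_by_mesh.items()}
best_mesh = min(errors, key=errors.get)
```

The same scoring extends directly to sweeps over boundary conditions, material properties, and target diameter, with the calculation-time budget deciding how far down the error curve to go.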
Procedia PDF Downloads 402
772 Working Conditions and Occupational Health: Analyzing the Stressing Factors in Outsourced Employees
Authors: Cledinaldo A. Dias, Isabela C. Santos, Marcus V. S. Siqueira
Abstract:
In contemporary globalization, the competitiveness generated by the search for new markets, aiming at the growth of productivity and, consequently, of profits, implies the redefinition of productive processes and new forms of work organization. As a result of this restructuring, unemployment, labor force turnover, and an increase in outsourcing and informal work occur. Considering the different relationships and working conditions of outsourced employees, this study aims to identify the stressors most present among outsourced service providers of a Federal Institution of Higher Education in Brazil. To reach this objective, a descriptive exploratory study was carried out, combining a quantitative survey with a qualitative approach chosen to provide an in-depth analysis of the occupational conditions of outsourced workers, since this method focuses on the social as a world of investigated meanings and on the language or speech of each subject. The survey was conducted in the city of Montes Claros - Minas Gerais (Brazil) and involved eighty workers from companies hired by the institution, including armed security guards, porters, cleaners, drivers, gardeners, and administrative assistants. The choice of professionals followed non-probabilistic criteria of convenience or accessibility. Data collection was performed by means of a structured questionnaire composed of sixty questions in a Likert-type frequency scale format, in order to identify potential organizational stressors. The results show that the stress factors pointed out by the workers are, in most cases, determinants of low productive performance at work. Among the factors associated with stress, the ones that stood out most were those related to organizational communication failures, the incentive to competition, lack of expectations of professional growth, insecurity, and job instability.
Based on the results, there is a need for greater concern and organizational responsibility for the well-being and mental health of the outsourced worker, along with recognition of their physical and psychological limitations and care that goes beyond functional capacity for work. Specifically for the preservation of mental health, physical health, and quality of life, it is concluded that professionals need a work environment that supports them internally, so that individuals remain in balance and obtain satisfaction in their work.
Keywords: occupational health, outsourced, organizational studies, stressors
Procedia PDF Downloads 106
771 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School
Authors: Osk Dagsdottir
Abstract:
Background—Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal subjects and patients requires knowledge of the gait parameters of the specific age group of normal children. However, there is still a lack of gait databases for normal children of different ages. Objectives—This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods—Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years. Three age groups were defined: young child (5-7), child (8-11), and adolescent (12-16). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at their comfortable speed along a 10-meter walkway. Each participant walked up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Moreover, each gait cycle was divided into 8 phases. The range of motion (RoM) of the pelvis, hip, knee, and ankle joints in three planes for both limbs was calculated using an in-house program. Results—The temporal-spatial variables of the three age groups of normal children were compared with each other; it was found that there was a significant difference (p < 0.05) between the groups. The step length and walking speed gradually increased from the young child to the adolescent group, while cadence gradually decreased from the young child to the adolescent group. The mean and standard deviation (SD) of the step length of the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively.
The mean and SD of the cadence of the young child, child, and adolescent groups were 140.11 ± 15.79 step/min, 129 ± 11.84 step/min, and 115.96 ± 6.47 step/min, respectively. Moreover, it was observed that there were significant differences in kinematic parameters, both over the whole gait cycle and in each phase. For example, the RoM of the knee angle in the sagittal plane over the whole cycle for the young child group (65.03 ± 0.52 deg) is larger than for the child group (63.47 ± 0.47 deg). Conclusion—Our results showed that there are significant differences between the age groups in the gait phases, and thus that children's walking performance changes with age. Therefore, it is important for the clinician to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
Keywords: action research, creative learning, mathematics education, professional development
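The range-of-motion quantity reported above reduces to the maximum minus the minimum of a joint-angle trace over the gait cycle, computed per anatomical plane. The sketch below uses synthetic knee-angle samples, not Vicon data.

```python
def range_of_motion(angles):
    """RoM (degrees) = max - min of a joint-angle trace over one gait cycle."""
    return max(angles) - min(angles)

def rom_by_plane(traces):
    """Apply RoM to each anatomical plane of one joint."""
    return {plane: range_of_motion(a) for plane, a in traces.items()}

# Synthetic knee-angle samples (degrees) over one gait cycle
knee = {
    "sagittal":   [5, 12, 18, 10, 6, 20, 45, 63, 40, 12],
    "frontal":    [-2, -1, 0, 1, 2, 1, 0, -1, -2, -1],
    "transverse": [3, 4, 6, 5, 4, 7, 9, 8, 5, 4],
}
rom = rom_by_plane(knee)
```

Per-phase RoM follows the same pattern, applied to the slice of samples belonging to each of the 8 gait phases rather than to the whole cycle.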
Procedia PDF Downloads 110
770 Effects of Potential Chloride-Free Admixtures on Selected Mechanical Properties of Kenya Clay-Based Cement Mortars
Authors: Joseph Mwiti Marangu, Joseph Karanja Thiong'o, Jackson Muthengia Wachira
Abstract:
The mechanical performance of hydrated cement mortars depends mainly on compressive strength and setting time. These properties are crucial in the construction industry. Pozzolana-based cements are mostly characterized by low 28-day compressive strength and long setting times. These are some of the major impediments to their production and diverse uses, despite the numerous technological and environmental benefits associated with them. The study investigated the effects of potential chemical activators on calcined clay-Portland cement blends with the aim of achieving high early compressive strength and shorter setting times in cement mortar. In addition, the standard consistency, soundness, and insoluble residue of all cement categories were determined. The test cement was made by blending calcined clays with Ordinary Portland Cement (OPC) at replacement levels from 35 to 50 percent by mass of the OPC, labeled PCC for the purposes of this study. Mortar prisms measuring 40 mm x 40 mm x 160 mm were prepared and cured in accordance with the KS EAS 148-3:2000 standard. Solutions of Na2SO4, NaOH, Na2SiO3, and Na2CO3 at concentrations of 0.5-2.5 M were separately added during casting. Compressive strength was determined on the 2nd, 7th, 28th, and 90th day of curing. For comparison purposes, commercial Portland Pozzolana Cement (PPC) and Ordinary Portland Cement (OPC) were also investigated without activators under similar conditions. X-Ray Fluorescence (XRF) was used for chemical analysis, while X-Ray Diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FTIR) were used for mineralogical analysis of the test samples. The results indicated that the addition of activators significantly increased the 2nd and 7th day compressive strength but produced only a minimal increase in the 28th and 90th day compressive strength. A relatively linear relationship was observed between compressive strength and the concentration of the activator solutions up to the 28th day of curing.
The addition of the said activators significantly reduced both the initial and final setting times. Standard consistency and soundness varied with the amount of clay in the test cement and the concentration of activators. The amount of insoluble residue increased with increased replacement of OPC with calcined clays. Mineralogical studies showed that N-A-S-H is formed in addition to C-S-H. In conclusion, a concentration of 2 M for all activator solutions produced the optimum compressive strength and greatly reduced the setting times for all cement mortars.
Keywords: activators, admixture, cement, clay, pozzolana
Procedia PDF Downloads 265
769 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes), and thermodynamic (Gibbs free energy) variations across the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
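As a rough stand-in for the FLS-PVLR procedure, pressure-varying Langmuir parameters can be illustrated by fitting the linearized isotherm P/q = 1/(K·qm) + P/qm with ordinary least squares over a sliding pressure window. This sliding-window fit is a simplification of flexible least squares (it has no explicit dynamic/measurement error split), and the (P, q) data are synthetic.

```python
def linear_fit(x, y):
    """Closed-form ordinary least squares; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def windowed_langmuir(P, q, w=4):
    """Fit P/q = 1/(K*qm) + P/qm in each window of w points.
    Returns (mid-window pressure, qm, K) triples: slope = 1/qm, slope/intercept = K."""
    out = []
    for i in range(len(P) - w + 1):
        Pw = P[i:i + w]
        yw = [p / v for p, v in zip(Pw, q[i:i + w])]
        slope, intercept = linear_fit(Pw, yw)
        out.append((Pw[w // 2], 1.0 / slope, slope / intercept))
    return out

# Synthetic isotherm generated from a constant-parameter Langmuir model:
# qm = 2.0 mmol/g, K = 0.5 bar^-1, so every window should recover these values
qm_true, K_true = 2.0, 0.5
P = [0.2 * k for k in range(1, 11)]
q = [qm_true * K_true * p / (1 + K_true * p) for p in P]
params = windowed_langmuir(P, q)
```

For genuinely pressure-varying data, the recovered (qm, K) would drift from window to window, which is the behavior FLS-PVLR resolves by balancing its dynamic and measurement residuals.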
Procedia PDF Downloads 234
768 Topographic Coast Monitoring Using UAV Photogrammetry: A Case Study in Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, Jorge Enrique Baños-Illana, Arturo Gómez-Barrero, José Isaac Ramírez-Macías, Erik Omar Paredes-Juárez, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga
Abstract:
Topographical changes in coastal areas are usually assessed with airborne LiDAR and conventional photogrammetry. In recent times, Unmanned Aerial Vehicles (UAVs) have been used in several photogrammetric applications, including coastline evolution. However, their use goes further: the associated point clouds can be used to generate beach Digital Elevation Models (DEMs). We present a methodology for monitoring coastal topographic changes along a 50 km coastline in Veracruz, Mexico, using high-resolution images (less than 10 cm ground resolution) and dense point clouds captured with a UAV. This monitoring takes place in the context of the Port of Veracruz expansion project, whose construction began in 2015, and intends to characterize coast evolution and to prevent and mitigate project impacts on coastal environments. The monitoring began with a historical coastline reconstruction from 1979 to 2015 using aerial photography and Landsat imagery. Some patterns could be defined: the northern part of the study area showed accretion, while the southern part showed erosion. The study area lies off the Port of Veracruz, a touristic and economically important Mexican city where coastal development structures have been built continuously since 1979, and the local beaches of the tourist area are constantly refilled. Those areas were not described as accretion, since sand-filled trucks refill the beaches in front of the hotel area every month. The marinas and the commercial Port of Veracruz, both the old port and the new expansion, were built in the erosion part of the area. Northward from the City of Veracruz the beaches were described as accretion areas, while southward from the city the beaches were described as erosion areas. One of the problems is the expansion of new development in the southern area of the city, using the beach view as an incentive to buy beachfront houses.
We assessed coastal changes between seasons using high-resolution images and point clouds during 2016, and preliminary results confirm that UAVs can be used in permanent coast monitoring programs with excellent performance and detail.
Keywords: digital elevation model, high-resolution images, topographic coast monitoring, unmanned aerial vehicle
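Seasonal topographic change between two co-registered UAV-derived DEMs reduces to cell-by-cell elevation differencing, from which net eroded or accreted volume follows. The tiny grids below are invented placeholders, not survey data from Veracruz.

```python
def dem_difference(dem_after, dem_before):
    """Cell-by-cell elevation change (m) between two co-registered DEM grids."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(dem_after, dem_before)]

def net_volume_change(diff, cell_area):
    """Net volume change (m^3): positive = accretion, negative = erosion."""
    return sum(d for row in diff for d in row) * cell_area

# Invented 3x3 beach DEMs (elevations in m); each cell covers 0.01 m^2
dem_winter = [[2.0, 2.1, 2.2], [1.8, 1.9, 2.0], [1.5, 1.6, 1.7]]
dem_summer = [[2.1, 2.2, 2.2], [1.9, 2.0, 2.0], [1.6, 1.7, 1.7]]
diff = dem_difference(dem_summer, dem_winter)
volume = net_volume_change(diff, cell_area=0.01)
```

In a real survey the two DEMs must first be co-registered on stable ground control points, and the differencing is only trusted where the change exceeds the vertical uncertainty of the photogrammetric reconstruction.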
Procedia PDF Downloads 270
767 Musculoskeletal Disorders among Employees of an Assembly Industrial Workshop: Biomechanical Constraints' Semi-Quantitative Analysis
Authors: Lamia Bouzgarrou, Amira Omrane, Haithem Kalel, Salma Kammoun
Abstract:
Background: During recent decades, the mechanical and electrical industrial sector has greatly expanded, with significant employability potential. However, this sector faces an increasing prevalence of musculoskeletal disorders, with heavy consequences in direct and indirect costs. Objective: The current intervention was motivated by the high frequency of upper limb and back musculoskeletal disorders among the operators of an assembly workshop in a leading company specialized in sanitary equipment and water and gas connections. We aimed to identify biomechanical constraints among these operators through a semi-quantitative analysis of activity and biomechanical exposures based on video recordings and the MUSKA-TMS software. Methods: We first conducted open observations and exploratory interviews in order to understand the work situation overall. Then, we analyzed the operators' activity through systematic observations and interviews. Finally, we conducted a semi-quantitative analysis of biomechanical constraints with the MUSKA-TMS software after video recording a representative activity period. The assessment of biomechanical constraints was based on different criteria: biomechanical characteristics (work positions), aggravating factors (cold, vibration, stress, etc.), and exposure time (duration and frequency of solicitations, recovery phases), with a synthetic risk-level score ranging from 1 to 4 (1: low risk of developing MSD; 4: high risk). Results: The semi-quantitative analysis identified many elementary operations with high biomechanical constraints, such as high repetitiveness, insufficient recovery time, and constraining angulations of the shoulders, wrists, and cervical spine. Among these risky elementary operations, we cite the assembly of the sleeve with the body, the assembly of the axis, and the control of gas valves on the testing table.
Transformations of the work situations were recommended, covering both the redevelopment of industrial areas and the integration of new tools and mechanical handling equipment that reduce operator exposure to vibration. Conclusion: Musculoskeletal disorders are complex and costly disorders. Moreover, an approach centered on the observation of work can promote interdisciplinary dialogue and exchange between actors, with the objective of maximizing the performance of a company and improving the quality of life of operators.
Keywords: musculoskeletal disorders, biomechanical constraints, semi-quantitative analysis, ergonomics
Procedia PDF Downloads 162
766 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross over classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential; the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important way to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the use of equivalent-circuit methods offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity-plane paradigm while taking its environment into account. The electromagnetic state of the discontinuity plane is then described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at Terahertz frequencies. The metasurface's building block consists of a thin gold film, a dielectric spacer of SiO₂, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows control of the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
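The dependence of graphene's surface conductivity on chemical potential, which underlies the phase tuning described above, can be illustrated with the standard intraband (Drude-like) term of the Kubo formula. Interband contributions and the MoM-GEC model itself are omitted here, and the relaxation time is a placeholder value, so this is a sketch of the tuning mechanism only.

```python
import math

E = 1.602176634e-19      # elementary charge (C)
KB = 1.380649e-23        # Boltzmann constant (J/K)
HBAR = 1.054571817e-34   # reduced Planck constant (J*s)

def sigma_intra(omega, mu_c, tau=1e-13, T=300.0):
    """Intraband graphene surface conductivity (S), Kubo formula, exp(+i*omega*t)
    convention; omega in rad/s, chemical potential mu_c in J, relaxation time tau in s."""
    x = mu_c / (KB * T)
    prefactor = E**2 * KB * T / (math.pi * HBAR**2)
    return 1j * prefactor * (x + 2.0 * math.log(math.exp(-x) + 1.0)) / (omega + 1j / tau)

# Conductivity at 1 THz for two chemical potentials (0.1 eV vs 0.5 eV)
omega = 2.0 * math.pi * 1e12
sig_low = sigma_intra(omega, 0.1 * E)
sig_high = sigma_intra(omega, 0.5 * E)
```

Raising the chemical potential increases the magnitude of the conductivity, which shifts the complex surface impedance of each patch and hence the phase of its reflection coefficient.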
Procedia PDF Downloads 177
765 Rheological and Sensory Attributes of Dough and Crackers Including Amaranth Flour (Amaranthus spp.)
Authors: Claudia Cabezas-Zabala, Jairo Lindarte-Artunduaga, Carlos Mario Zuluaga-Dominguez
Abstract:
Amaranth is an emerging pseudocereal rich in essential nutrients such as protein and dietary fiber; it was employed as an ingredient in the formulation of crackers to evaluate the rheological performance and sensory acceptability of the obtained food. A completely randomized factorial design was used with two factors: (A) the ratio of wheat to amaranth flour used in the preparation of the dough, in proportions of 90:10 and 80:20 (% w/w), and (B) two levels of inulin addition, 8.4% and 16.7%, with two control doughs made from amaranth and wheat flour, respectively. Initially, the functional properties of the formulations were measured, showing no significant differences in water absorption capacity (WAC) and swelling power (SP), with mean values between 1.66 and 1.81 g/g for WAC and between 1.75 and 1.86 g/g for SP. The amaranth flour had the highest water holding capacity (WHC), 8.41 ± 0.15 g/g, and emulsifying activity (EA), 74.63 ± 1.89 g/g. Moreover, the rheological behavior, measured with a farinograph, an extensograph, a Mixolab, and the falling number, showed that the formulation containing 20% amaranth flour and 7.16% inulin had a rheological behavior similar to the control produced exclusively with wheat flour, the former being the one selected for the preparation of crackers. For this formulation, the farinograph showed a mixing tolerance index of 11 BU, indicating a strong and cohesive dough; likewise, the Mixolab showed the dough reaches stability at 6.47 min, indicating good resistance to mixing. On the other hand, the extensograph showed a dough resistance of 637 BU and an extensibility of 13.4 mm, which corresponds to a strong dough capable of resisting lamination. Finally, the falling number was 318 s, which indicates the crumb will retain enough air to enhance the crispness characteristic of a cracker.
Finally, a consumer sensory test showed no significant difference in aroma between the control and the selected formulation, while the latter had a significantly lower rating in flavor. However, a purchase intention of 70% was observed among the population surveyed. The results obtained in this work open perspectives for the industrial use of amaranth in baked goods. Additionally, amaranth has been a product typically linked to indigenous populations in the Andean South American countries; therefore, the search for diversification and alternative uses for this pseudocereal has an impact on the social and economic conditions of such communities. The technological versatility and nutritional quality of amaranth are an advantage for consumers, favoring the consumption of healthy products with important contributions of dietary fiber and protein.
Keywords: amaranth, crackers, rheology, pseudocereals, kneaded products
Procedia PDF Downloads 120
764 Dose Profiler: A Tracking Device for Online Range Monitoring in Particle Therapy
Authors: G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, V. Patera, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, G. Traini, S. M. Valle, C. Voena
Abstract:
Accelerated charged particles, mainly protons and carbon ions, are presently used in Particle Therapy (PT) to treat solid tumors. The precision of PT, which exploits the highly localized dose deposition of charged particles in tissues and their biological effectiveness in killing cancer cells, demands an online dose monitoring technique, crucial to improve the quality assurance of treatments: possible patient mis-positionings and biological changes with respect to the CT scan could negatively affect the therapy outcome. In PT, the beam range confined in the irradiated target can be monitored thanks to the secondary radiation produced by the interaction of the projectiles with the patient's tissue. The Dose Profiler (DP) is a novel device designed to track charged secondary particles and reconstruct their longitudinal emission distribution, which is correlated to the Bragg peak position. The feasibility of this approach has been demonstrated by dedicated experimental measurements. The DP has been developed in the framework of the INSIDE project (MIUR, INFN and Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche 'E. Fermi', Roma, Italy) and will be tested at the Proton Therapy center of Trento (Italy) by the end of 2017. The DP combines a tracker, made of six layers of two-view scintillating fibers with square cross section (0.5 x 0.5 mm²), with two layers of two-view scintillating bars (cross section 12.0 x 0.6 mm²). The electronic readout is performed by silicon photomultipliers. The sensitive area of the tracking planes is 20 x 20 cm². To optimize the detector layout, a Monte Carlo (MC) simulation based on the FLUKA code has been developed. The complete DP geometry and the track reconstruction code have been fully implemented in the MC. In this contribution, the DP hardware will be described.
The expected detector performance, computed using a dedicated simulation of a 220 MeV/u carbon ion beam impinging on a PMMA target, will be presented, and the results will be discussed in the standard clinical application framework. A possible procedure for real-time beam range monitoring is proposed, based on the expected performance in actual clinical operation.
Keywords: online range monitoring, particle therapy, quality assurance, tracking detector
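The core of the Dose Profiler's range estimate is backprojecting reconstructed secondary-particle tracks to the beam axis. A minimal sketch of that idea, assuming straight-line tracks measured as (z, x) hits in two tracker planes and a beam along x = 0; the plane positions and hit values below are illustrative, not the detector's actual geometry or reconstruction code:

```python
import statistics

def backproject_to_beam_axis(hits):
    """Extrapolate a straight-line track, given as (z, x) hits in two
    tracker planes, back to the beam axis (x = 0) and return the z of
    the crossing point -- a rough proxy for the emission point."""
    (z1, x1), (z2, x2) = hits
    slope = (x2 - x1) / (z2 - z1)
    return z1 - x1 / slope  # z where the line crosses x = 0

# Toy tracks: two (z, x) measurements each, planes at z = 50 and 60 cm
tracks = [[(50.0, 10.0), (60.0, 12.0)],
          [(50.0, 8.0), (60.0, 10.0)],
          [(50.0, 12.0), (60.0, 14.5)]]
emission_z = [backproject_to_beam_axis(t) for t in tracks]
print(emission_z)                      # estimated emission points along the beam
print(statistics.median(emission_z))   # a robust summary of the profile
```

A real reconstruction would histogram these crossing points to build the longitudinal emission distribution and correlate its fall-off with the Bragg peak position.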
Procedia PDF Downloads 240
763 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products, such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” derived from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increasing availability of spatial covariates, national and continental DSM initiatives are continuously multiplying.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to differing laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising for providing tools to improve and monitor soil quality at the national, EU, and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
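The DSM workflow described above — calibrating a model on observed soil data plus spatial covariates, then generalizing it over an unsampled grid — can be sketched minimally. A 1-nearest-neighbour rule stands in here for the ML models (e.g. random forests) that real DSM products use, and all values are illustrative, not data from any national initiative:

```python
import math

# Hypothetical calibration points: (elevation_m, ndvi) covariates paired with
# an observed topsoil organic-carbon content (g/kg).
calibration = [((120.0, 0.30), 14.0), ((450.0, 0.55), 22.0),
               ((800.0, 0.70), 35.0), ((300.0, 0.45), 18.0)]

def predict_soc(covariates):
    """Predict at an unsampled grid cell from the nearest calibration point;
    NDVI is rescaled (x1000) so both covariates weigh comparably."""
    def dist(c):
        return math.hypot(c[0][0] - covariates[0],
                          (c[0][1] - covariates[1]) * 1000)
    return min(calibration, key=dist)[1]

# "Generalize the model" over a grid where soil is unobserved
grid = [(130.0, 0.32), (760.0, 0.68)]
print([predict_soc(cell) for cell in grid])
```

In practice the calibration set would hold thousands of profiles, the covariate vector dozens of layers, and validation would use held-out observations (e.g. cross-validation) to estimate the prediction uncertainty the review calls for.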
Procedia PDF Downloads 184
762 Sentiment Analysis of Creative Tourism Experiences: The Case of Girona, Spain
Authors: Ariadna Gassiot, Raquel Camprubi, Lluis Coromina
Abstract:
Creative tourism involves the participation of tourists in the co-creation of their own experiences in a tourism destination. Consequently, creative tourists move from passive to active behavior, and tourism destinations address this type of tourism by changing the scenario and making tourists learn and participate while they travel, instead of merely offering them tourism products and services. In creative tourism experiences, tourists are in close contact with locals and their culture. In destinations where culture (e.g., food, heritage) is the basis of the offer, such as Girona, Spain, tourism stakeholders must especially consider, analyze, and further foster the co-creation of authentic tourism experiences. They should focus on discovering more about these experiences, their main attributes, visitors’ opinions, etc. Creative tourists do not only participate while they travel around the world; they also show active post-travel behavior, writing freely about their tourism experiences in different channels. User-generated content becomes crucial for any tourism destination when analyzing the market, making decisions, planning strategies, and addressing issues such as reputation and performance. Sentiment analysis is a methodology used to automatically analyze semantic relationships and meanings in texts, and thus a way to extract tourists’ emotions and feelings. Tourists normally express their views and opinions regarding tourism products and services. They may express positive, neutral, or negative feelings towards these products or services: for example, anger, love, hate, sadness, or joy. They may also express feelings through verbs, nouns, adverbs, and adjectives, among other parts of speech. Sentiment analysis may help tourism professionals in a range of areas, from marketing to customer service.
For example, sentiment analysis allows tourism stakeholders to forecast tourism expenditure and tourist arrivals, or to analyze tourists’ profiles. While there is an increasing presence of creativity in tourists’ experiences, there is also an increasing need to explore tourists’ expressions about these experiences and to know how they feel about participating in specific tourism activities. Thus, the main objective of this study is to analyze the meanings, emotions, and feelings that tourists express about their creative experiences in Girona, Spain, using sentiment analysis methodology. Results show the diversity of tourists who actively participate in tourism in Girona. Their opinions refer both to tangible aspects (e.g. food, museums) and to intangible aspects (e.g. friendliness, nightlife) of tourism experiences. Tourists express love, liking, and other sentiments towards tourism products and services in Girona. This study can help tourism stakeholders understand tourists’ experiences and feelings; consequently, they can offer more customized products and services and efficiently involve tourists in the co-creation of their own tourism experiences.
Keywords: creative tourism, sentiment analysis, text mining, user-generated content
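As a toy illustration of the approach, a lexicon-based scorer counts positive and negative words in a review and classifies its overall polarity. The word lists and example reviews below are invented for illustration; they are not the lexicon or the user-generated content analyzed in the study:

```python
# Minimal sentiment lexicons (illustrative only)
POSITIVE = {"love", "friendly", "delicious", "joy", "wonderful"}
NEGATIVE = {"hate", "anger", "sad", "disappointing", "crowded"}

def sentiment(review):
    """Classify a review as positive/negative/neutral by lexicon hits."""
    tokens = review.lower().replace(".", " ").replace(",", " ").split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = ["We love the food markets, the people are so friendly.",
           "The old town was crowded and the service disappointing.",
           "We visited the museum."]
print([sentiment(r) for r in reviews])
```

Production systems replace the hand-made lexicon with learned models and handle negation, intensifiers, and domain-specific vocabulary, but the principle — mapping words to emotion and polarity scores — is the same.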
Procedia PDF Downloads 180
761 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach
Authors: Dominik Kowitzke
Abstract:
Housing construction has direct and indirect environmental impacts, especially those caused by soil sealing and the gray energy embodied in construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing among empty nest households seems worthwhile to investigate for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs when no space adjustment takes place in the household lifecycle stage in which children move out from home, so that the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet a household’s equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws that keep tenure rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households which never had children.
The approach used to estimate the average difference in living space is a linear regression model that regresses the response variable, living space, on a two-level categorical variable distinguishing the two groups of household types, plus further controls. This difference is taken to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after the children have left home, despite market frictions impairing the relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand for housing space, dwelling constructions in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
Keywords: empty nests, environment, Germany, households, over-housing
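The estimation logic can be sketched with toy numbers: for a single binary regressor, the OLS coefficient equals the difference in group means, which is then extrapolated to the stock of empty nests. All figures below are hypothetical, not the study's survey data:

```python
from statistics import mean

# Toy living-space data (m^2): the binary regressor is 1 for empty-nest
# households, 0 for the never-had-children comparison group.
empty_nest = [120.0, 110.0, 130.0, 125.0]
comparison = [85.0, 90.0, 95.0, 80.0]

# With a single binary regressor and no controls, the OLS slope reduces to
# the difference in group means -- the under-utilized space per household.
diff = mean(empty_nest) - mean(comparison)

n_empty_nests = 1_000_000              # hypothetical population count
total_mn_m2 = diff * n_empty_nests / 1e6
print(diff)                            # m^2 per empty-nest household
print(total_mn_m2)                     # extrapolated total, millions of m^2
```

The paper's actual model adds further controls, so its coefficient is a conditional rather than raw mean difference; the sketch only shows why the coefficient is read as "under-utilized space".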
Procedia PDF Downloads 173
760 Rupture in the Paradigm of the International Policy of Illicit Drugs in the Field of Public Health and within the Framework of the World Health Organization, 2001 to 2016
Authors: Emy Nayana Pinto, Denise Bomtempo Birche De Carvalho
Abstract:
In the present study, the harmful use of illicit drugs is seen as a public health problem and as one of the expressions of the social question, since its consequences fall mainly on the poorer classes of the population. This perspective is a counterpoint to the dominant paradigm on illicit drug policy at the global level, whose centrality lies within the criminal justice arena. The ‘drug problem’ is combated internationally through fragmented approaches that focus on banning drugs and criminalizing users. In this sense, the research seeks to answer the following key questions: What are the influences of prohibitionism on the recommendations of the United Nations (UN) and the World Health Organization (WHO) and on the formulation of drug policies in member countries? Which actors have been advancing the prospect of breaking with the prohibitionist paradigm? What is the WHO’s contribution to the rupture with the prohibitionist paradigm and the displacement of the drug problem into the field of public health? The general objective of this work is to seek evidence of the perspective of rupture with the prohibitionist paradigm in the field of drug policies at the global and regional levels, through the analysis of World Health Organization (WHO) documents from 2001 to 2016. The research was carried out in bibliographical and documentary sources. The bibliographic sources supported the approach to the object and the theoretical basis of the research. The documentary sources served to answer the research questions and evidence the existence of a perspective of change in drug policy. Twenty-two documents of the UN system were consulted, fifteen of which involved contributions from the World Health Organization (WHO).
In addition to the documents that directly relate to the subject of the research, documents were analyzed from various agencies, programs, and offices, such as the Joint United Nations Programme on HIV/AIDS (UNAIDS) and the United Nations Office on Drugs and Crime (UNODC), which also have drugs as a central or cross-cutting theme of their work. The results showed that, from the 2000s onward, it was possible to find in the literature review and in the documentary analysis evidence of the critique of the prohibitionist paradigm, in parallel with the construction of a new perspective for drug policy at the global level and the displacement of criminal justice approaches into the scope of public health, with the adoption of alternative and pragmatic interventions based on human rights, scientific evidence, and the reduction of the social and health harms caused by the misuse of illicit drugs.
Keywords: illicit drugs, international organizations, prohibitionism, public health, World Health Organization
Procedia PDF Downloads 157
759 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Clement Yeboah, Eva Laryea
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for statistical knowledge were statistically significant, indicating significant mean gains as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the material as they play, without realizing that they are mastering a course they expected to be hard. The future directions of the present study are promising as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to change the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety
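The paired-samples t-test used in the study computes t = mean(d) / (sd(d) / sqrt(n)) over the per-student pretest-posttest differences d. A minimal sketch; the scores below are invented for illustration and are not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject posttest-minus-pretest differences."""
    d = [b - a for a, b in zip(pre, post)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Illustrative scores only: knowledge rises from pretest to posttest
knowledge_pre  = [55, 60, 48, 70, 62]
knowledge_post = [68, 71, 60, 78, 75]
print(round(paired_t(knowledge_pre, knowledge_post), 2))
```

The same function applied to anxiety scores (which fall from pretest to posttest) would yield a negative t; significance is then judged against the t distribution with n - 1 degrees of freedom.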
Procedia PDF Downloads 78
758 A Novel Nano-Chip Card Assay as Rapid Test for Diagnosis of Lymphatic Filariasis Compared to Nano-Based Enzyme Linked Immunosorbent Assay
Authors: Ibrahim Aly, Manal Ahmed, Mahmoud M. El-Shall
Abstract:
Filariasis is a parasitic disease caused by small roundworms, which are transmitted and spread by blood-feeding black flies and mosquitoes. Lymphatic filariasis (elephantiasis) is caused by Wuchereria bancrofti, Brugia malayi, and Brugia timori. The elimination of lymphatic filariasis creates an increasing demand for valid, reliable, and rapid diagnostic kits. Nanodiagnostics involve the use of nanotechnology in clinical diagnosis to meet the demands for increased sensitivity, specificity, and earlier detection in less time. The aim of this study was to evaluate a nano-based enzyme-linked immunosorbent assay (ELISA) and a novel nano-chip card as a rapid test for the detection of filarial antigen in human serum samples, in comparison with traditional ELISA. Serum samples were collected from individuals infected with filariasis across Egypt's governorates. After informed consent was obtained, a total of 45 blood samples were collected from infected individuals residing in different villages of the Gharbea governorate, a nonendemic region for bancroftian filariasis, together with samples from healthy persons living in nonendemic locations (20 persons) and sera from 20 patients affected by other parasites. Microfilariae were checked in thick smears of 20 µl night blood samples collected between 20:00 and 22:00. All of these individuals underwent the following procedures: history taking, clinical examination, and laboratory investigations, which included examination of blood samples for microfilariae using thick blood films and serological tests for detection of the circulating filarial antigen using polyclonal-antibody ELISA, nano-based ELISA, and the nano-chip card. In the present study, a recently reported polyclonal antibody specific to a tegumental filarial antigen was used to develop the nano-chip card and nano-ELISA, which were compared to traditional ELISA for the detection of circulating filarial antigen in sera of patients with bancroftian filariasis.
The performance of the assays was evaluated using 45 serum samples. The traditional ELISA was positive with sera from microfilaremic bancroftian filariasis patients (n = 36), a sensitivity of 80%. Circulating filarial antigen was detected in 39/45 patients using nano-ELISA, a sensitivity of 86.7%. The nano-chip card, in turn, was positive in 42 out of 45 patients, a sensitivity of 93.3%. In conclusion, the novel nano-chip assay could potentially be a promising alternative antigen detection test for bancroftian filariasis.
Keywords: lymphatic filariasis, nanotechnology, rapid diagnosis, ELISA technique
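The reported sensitivities follow directly from sensitivity = TP / (TP + FN) over the 45 confirmed sera. A quick check (note that 39/45 rounds to 86.7%, whereas the abstract's 86.6% appears to be truncated):

```python
def sensitivity(true_positives, total_infected):
    """Diagnostic sensitivity = TP / (TP + FN), as a percentage,
    where FN = total_infected - TP."""
    return 100.0 * true_positives / total_infected

# Figures from the abstract: positives among 45 confirmed filariasis sera
for assay, tp in [("traditional ELISA", 36),
                  ("nano-ELISA", 39),
                  ("nano-chip card", 42)]:
    print(assay, round(sensitivity(tp, 45), 1))
```

Specificity would be computed analogously from the 40 control sera (healthy and other-parasite groups), but the abstract does not report the false-positive counts needed for that.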
Procedia PDF Downloads 115
757 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling
Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes
Abstract:
The lithium-ion battery (LiB) is the technology of choice for the development of electric vehicles, but many challenges remain, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After the black mass is produced by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to take up the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes, used to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and to precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent with an aqueous feed solution containing several metals, including the targeted metal(s) to extract. The organic phase is immiscible with the aqueous phase. It is composed of an extractant, which extracts the target metals, and a diluent, which is usually aliphatic kerosene produced by the petroleum industry. Sometimes a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent.
Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, in addition to the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many studies have focused on the effect of the extractant. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims to investigate the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161, and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon-footprint process based on the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as metallic copper.
Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling
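A standard way to quantify such extraction properties is the distribution ratio D (organic-phase over aqueous-phase metal concentration at equilibrium) and the single-stage extraction efficiency %E = 100·D / (D + V_aq/V_org). A minimal sketch with illustrative D values, not measurements from this work:

```python
def percent_extracted(D, v_aq=1.0, v_org=1.0):
    """Single-stage liquid-liquid extraction efficiency from the
    distribution ratio D: %E = 100 * D / (D + V_aq / V_org)."""
    return 100.0 * D / (D + v_aq / v_org)

# Illustrative distribution ratios only: a selective solvent shows a high D
# for the target metal and a very low D for co-extracted impurities.
print(round(percent_extracted(100.0), 1))  # target metal, D = 100
print(round(percent_extracted(0.01), 1))   # impurity, D = 0.01
```

Comparing D (and hence %E) for the same extractant across petroleum-based and bio-sourced diluents is exactly the kind of measurement the study uses to reveal the diluent effect.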
Procedia PDF Downloads 88
756 Participation of Women in the Brazilian Paralympic Sports
Authors: Ana Carolina Felizardo Da Silva
Abstract:
People with disabilities are those who have limitations of a physical, mental, intellectual, or sensory nature and who, therefore, should not be excluded or marginalized. In Brazil, the Brazilian Law for the Inclusion of People with Disabilities establishes that people with disabilities have the right to culture, sport, tourism, and leisure on an equal basis with other people. Sport for people with disabilities, in its genesis, was aimed at rehabilitating men and soldiers: the male figure who returned wounded from war and needed care. As it gained practitioners, marketing concerns emerged and, successively, high performance, in what we call Paralympic sport. We found that sport for people with disabilities was designed for men, corroborating the social idea that sport is a masculine and masculinizing environment. The inclusion of women with disabilities in sports thus becomes a double challenge: because they are women and because they have a disability. Data collected from official documents of the International Paralympic Committee show that the first report on the participation of women dates from 1948, in Stoke Mandeville, England, at a championship considered the forerunner of what later came to be called the Paralympic Games. However, due to the lack of information, records of women participating in the Paralympics only reappear 40 long years later, in 1984, which demonstrates a large gap in the records on the official website regarding women in the games. Despite this great challenge, the number of women has been growing substantially. Data from participants of all 16 editions of the Paralympic Games show that in the last edition, held in Tokyo, 1,853 of the 4,400 competing athletes were women, representing 42% of the total. This same edition had the largest delegation of Brazilian women, 96 athletes out of a total of 260 Brazilian athletes.
It is estimated that in the next edition, to take place in Paris in 2024, the participation of women will equal or surpass that of men. A certain invisibility of women participating in the Paralympic Games is noticeable when we access the database of the Brazilian Paralympic Committee website: it is possible to identify all women medalists of a given edition, but participating female athletes who did not win medals are not registered on the site. Regarding the participation of Brazilian women in the Paralympics, there was considerable growth over the last editions, from only 69 women participating in 2012 to 102 in 2016 and 96 in 2021. The same happened with medalists, going from 8 Brazilians in 2012 to 33 in 2016 and 27 in 2021. In this sense, the present study aims to analyze how Brazilian women participate in the Paralympics, giving visibility and voice to female athletes. Structured interviews are being carried out with participants in the games, identifying the difficulties and potentialities these athletes encounter in competition. The analysis will be carried out through Bardin’s content analysis.
Keywords: paralympics, sport for people with disabilities, woman, woman in sport
Procedia PDF Downloads 78
755 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application
Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal
Abstract:
This study proposed four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, matching the molecular size distribution of actual jet fuel. The surrogates were composed of seven components: n-propylcyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄), and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was first developed by a decoupling methodology, combining a C₄-C₁₆ skeletal mechanism for the oxidation of the heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can thereby be used in computational fluid dynamics (CFD) simulation. Extensive validation was performed for the individual single components, including ignition delay times, species concentration profiles, and laminar flame speeds, against various fundamental experiments under wide operating conditions, and for their blended mixtures. Among the surrogates, S1 was extensively validated against experimental data from shock tubes, rapid compression machines, jet-stirred reactors, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates, S2, S3, and S4, were validated against shock tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 and GTL, IPK, and RP-3, respectively.
Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and parallel validation data were used as for the previously developed surrogates, but at high-temperature conditions only. After testing the prediction performance of the mechanisms developed by the decoupling methodology, the results were compared with those of the surrogates developed with the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model also has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism
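A surrogate's bulk properties follow from its composition: the mole-fraction-weighted average carbon and hydrogen counts give the mean formula, H/C ratio, and mean molecular weight used to match a target fuel. The mole fractions below are hypothetical, since the abstract does not give the S1-S4 compositions; only the seven component formulas come from the text:

```python
# Surrogate components from the abstract, as (carbon, hydrogen) atom counts
components = {"n-propylcyclohexane": (9, 18), "n-propylbenzene": (9, 12),
              "n-undecane": (11, 24), "n-dodecane": (12, 26),
              "n-tetradecane": (14, 30), "n-hexadecane": (16, 34),
              "iso-cetane": (16, 34)}

# Hypothetical mole fractions (must sum to 1); NOT the actual compositions
x = {"n-propylcyclohexane": 0.10, "n-propylbenzene": 0.15, "n-undecane": 0.20,
     "n-dodecane": 0.25, "n-tetradecane": 0.15, "n-hexadecane": 0.10,
     "iso-cetane": 0.05}

c_avg = sum(x[k] * components[k][0] for k in x)   # mean C atoms per molecule
h_avg = sum(x[k] * components[k][1] for k in x)   # mean H atoms per molecule
mw = 12.011 * c_avg + 1.008 * h_avg               # mean molecular weight, g/mol
print(round(c_avg, 2), round(h_avg, 2), round(h_avg / c_avg, 2), round(mw, 1))
```

Matching such combustion-property targets (H/C ratio, molecular weight, derived cetane number, etc.) against the real fuel is the usual first step before the kinetic validation described above.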
Procedia PDF Downloads 137