Search results for: measurement validity
339 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health
Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon
Abstract:
Humans do not behave rationally. We are emotional and easily influenced by others as well as by our context. The study of human behaviour has become a central endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans, what triggers them to perform certain activities, and what it takes to change their behaviour is central for researchers and companies, as well as for policy makers seeking to implement effective public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail, and the environment, the methodological models guiding the evaluation of such research have long since reached their limits. Within this context, digitisation, information and communication technologies (ICT), wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change that capitalises on digital developments in applied research projects and is applicable to academia, enterprises, and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health, or media. The Modular Behavioural Analysis Approach (MBAA) is described here and first validated through a concrete use case within the domain of health.
The results gathered have shown that disclosing information about health, in connection with the use of digital health apps, can act as a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of different 'behavioural profiles', towards which several typologies of interventions can be addressed, is essential to effectively enable behavioural change. The refined version of the MBAA will focus strongly on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.
Keywords: behavioural change, framework, health, nudging, sustainability
Procedia PDF Downloads 220
338 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms derive from heuristics, their outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept and tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually applied when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers.
Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. This approach deciphers the biological meaning of input data down to the metabolic and physiological mechanisms, based on a compiler with grammars derived from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-conditions diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 69
337 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples
Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann
Abstract:
Injection molding is one of the most commonly used techniques in industrial polymer processing. In the conventional injection molding process, the liquid polymer is injected into the cavity of the mold, where the polymer starts hardening directly at the cooled walls. To compensate for the shrinkage, which is caused predominantly by the immediate cooling, holding pressure is applied. Throughout this process, residual stresses are produced by the temperature difference between the polymer melt and the injection mold and by the relocation of the polymer chains, which were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution for reducing the residual stresses is variothermal processing. Here, the mold is heated, i.e. near or above the glass transition temperature of the polymer, the polymer is injected, and before the mold is opened and the part ejected, the mold is cooled. For the next cycle, the mold is heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water), which is pumped through fluid channels underneath the mold surface. In this paper, the influences of variothermal processing on the residual stresses are analyzed with samples on a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence of functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. For this purpose, polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, which in this case has a temperature varying between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down, varying from 70 °C to 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors.
The residual stresses of the produced samples are visualized by birefringence, which exploits the effect of stress on the refractive index of the polymer. The colorful spectrum can be revealed by placing the sample between a polarized light source and a second polarization filter. To show the processing effects on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima of each color spectrum than the isothermally produced samples, which indicates that the residual stress of the variothermally produced samples is lower.
Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing
Procedia PDF Downloads 282
336 Environmental Accounting: A Conceptual Study of the Indian Context
Authors: Pradip Kumar Das
Abstract:
As the entire world continues its rapid move towards industrialization, mankind's ability to maintain an ecological balance is seriously threatened. Geographical and natural forces have a significant influence on the location of industries. Industrialization is the foundation stone of the development of any country, while unplanned industrialization and the discharge of waste by industries are the cause of environmental pollution. There is a growing degree of awareness and concern globally among nations about environmental degradation and pollution. Environmental resources, endowed by the gift of nature and not man-made, are invaluable natural resources of a country like India. Any developmental activity is directly related to natural and environmental resources. Economic development without environmental considerations brings about environmental crises and damages the quality of life of the present as well as future generations. As corporate sectors in the global market, especially in India, are becoming anxious about environmental degradation, naturally more and more emphasis will be placed on how environment-friendly the outcomes are. Maintaining accounts of such environmental and natural resources in the country has become more urgent. Moreover, international awareness and acceptance of the importance of environmental issues have motivated the development of a branch of accounting called "environmental accounting". Environmental accounting attempts to identify and highlight the resources consumed and the costs imposed on the environment by an industrial unit. For the sustainable development of mankind, a healthy environment is indispensable. Gradually, therefore, in many countries including India, environmental matters are being given topmost priority. Accounting for and disclosure of environmental matters have been increasingly manifesting as an important dimension of corporate accounting and reporting practices.
But, as conventional accounting deals mainly with non-living things, the formulation of valuation, measurement, and accounting techniques for incorporating environment-related matters into the corporate financial statement sometimes creates problems for the accountant. In light of this situation, the conceptual analysis of the study is concerned with the rationale of environmental accounting for the economy and society as a whole, and highlights the failures of the traditional accounting system. A modest attempt has been made to throw light on environmental awareness in developing nations like India and to discuss the problems associated with the implementation of environmental accounting. The conceptual study also reflects that, despite different anomalies, environmental accounting is becoming an increasingly important aspect of the accounting agenda within the corporate sector in India. Lastly, a conclusion, along with recommendations, is given to overcome the situation.
Keywords: environmental accounting, environmental degradation, environmental management, environmental resources
Procedia PDF Downloads 340
335 Developing and Testing a Questionnaire of Music Memorization and Practice
Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos
Abstract:
Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet, it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While memorization and practice strategies of professionals have been studied, little research has been done to examine how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. A survey was developed for a cross-cultural research project aiming at examining how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire development included members of a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each aspect of practice and memorization identified, based on literature, personal experiences, and adapted from existing questionnaires. Item development took the varying levels of cognitive and social development of the target populations into consideration. It also considered the diverse target learning environments. Items were initially grouped in accordance with a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (containing 29 items), and a section on memorization (containing 40 items). Next, the response process was considered and a 5-point Likert scale ranging from ‘always’ to ‘never’ with a verbal label and an image assigned to each response option was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. 
Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were utilized to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach's alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement of young musicians of different levels and instruments, across different learning and cultural environments, in instrumental practice and memorization. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was, therefore, revised accordingly. The low Cronbach's alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions. It provides an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
Keywords: cross-cultural, memorization, practice, questionnaire, young musicians
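The internal-reliability statistic used in this study, Cronbach's alpha, can be computed directly from an item-score matrix. The following is a minimal Python sketch of the calculation; the 5-point Likert responses below are hypothetical and serve only to illustrate the formula, not to reproduce the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the group
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (1 = never ... 5 = always): 6 respondents, 4 items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))
```

An alpha near 1 indicates that the items in a group vary together across respondents; values below roughly 0.7 are conventionally read as weak internal consistency, which is the pattern the pilot study observed for many item groups.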
Procedia PDF Downloads 122
334 The Effects of Nano Zerovalent Iron (nZVI) and Magnesium Oxide Nanoparticles on Methane Production during Anaerobic Digestion of Waste Activated Sludge
Authors: Passkorn Khanthongthip, John T. Novak
Abstract:
Many studies have reported that nZVI and MgO nanoparticles (NPs) are often found in waste activated sludge (WAS). However, little is known about the impact of these NPs on WAS stabilization. The aims of this study were to investigate the effects of both NPs on WAS anaerobic digestion for methane production and to examine the change in the methanogenic population under these different environments using qPCR. Four dosages (2, 50, 100, and 200 mg/g-TSS) of MgO NPs were added to four different bottles containing WAS to investigate the impact of MgO NPs on methane production during WAS anaerobic digestion. The effects of nZVI on methane production during WAS anaerobic digestion were also examined in another four bottles using the same methods described above, except that the MgO NPs were replaced by nZVI. A bottle of WAS anaerobic digestion without nanoparticle addition was also operated to serve as a control. It was found that the relative amounts of methane production, compared to the control system, in the WAS anaerobic digestion bottles receiving 2, 50, 100, and 200 mg/g-TSS MgO NPs were 98, 62, 28, and 14%, respectively. This suggests that higher MgO NP dosages resulted in lower methane production. The batch-test data on the effects of the corresponding released Mg2+ indicated that 50 mg/g-TSS MgO NPs or more could inhibit methane production by at least 25%. Moreover, the volatile fatty acid (VFA) concentration was 328, 384, 928, 3,684, and 7,848 mg/L for the control and the four WAS anaerobic digestion bottles with 2, 50, 100, and 200 mg/g-TSS MgO NP addition, respectively. Higher VFA concentrations could reduce pH and subsequently decrease methanogen growth, resulting in lower methane production. The relative numbers of total gene copies of methanogens, analyzed from samples taken from the WAS anaerobic digestion bottles, were approximately 99, 68, 38, and 24% of the control for the addition of 2, 50, 100, and 200 mg/g-TSS, respectively.
Clearly, the more MgO NPs present in the sludge anaerobic digestion system, the fewer methanogens remained. In contrast, the relative amounts of methane production found in the other four WAS anaerobic digestion bottles receiving 2, 50, 100, and 200 mg/g-TSS nZVI were 102, 128, 112, and 104% of the control, respectively. The measurement of the methanogenic population indicated that the relative contents of methanogen gene copies were 101, 132, 120, and 112% of those found in the control, respectively. Additionally, the cumulative VFA was 320, 234, 308, and 330 mg/L, respectively. This reveals that nZVI addition could help increase the methanogenic population. A higher amount of methanogens accelerated VFA degradation for greater methane production, resulting in lower VFA accumulation in the digesters. Moreover, the batch-test data on the effects of the corresponding released Fe2+ suggest that the addition of approximately 50 mg/g-TSS nZVI increased methane production by 20%. In conclusion, the presence of MgO NPs appeared to diminish methane production during WAS anaerobic digestion, and higher MgO NP dosages resulted in greater inhibition. In contrast, nZVI addition promoted the methanogenic population, which facilitated methane production.
Keywords: magnesium oxide nanoparticles, methane production, methanogenic population, nano zerovalent iron
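Relative gene-copy numbers of the kind reported above (e.g., 24% of control) are commonly derived from qPCR threshold cycles via the 2^-ΔΔCt method. The sketch below illustrates that calculation under stated assumptions: the marker gene (mcrA), the reference gene (16S rRNA), and all Ct values are hypothetical, chosen only so the arithmetic lands near the reported range.

```python
def relative_abundance(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative gene copy number versus the control by the 2^-ΔΔCt method."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical threshold cycles (not the study's data):
# Control digester:            mcrA Ct = 22.0, 16S Ct = 15.0
# 200 mg/g-TSS MgO digester:   mcrA Ct = 25.1, 16S Ct = 16.0
rel = relative_abundance(25.1, 16.0, 22.0, 15.0)
print(f"{rel:.0%} of control")
```

Normalizing to a reference gene cancels differences in DNA extraction efficiency and template amount between samples, which is why the method reports abundances relative to the control rather than absolute copy counts.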
Procedia PDF Downloads 294
333 The Toxicity of Doxorubicin Connected with Nanotransporters
Authors: Iva Blazkova, Amitava Moulick, Vedran Milosavljevic, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek
Abstract:
Doxorubicin is one of the most commonly used and most effective chemotherapeutic drugs. This anthracycline drug, isolated from the bacterium Streptomyces peucetius var. caesius, is sold under the trade name Adriamycin (hydroxydaunomycin, hydroxydaunorubicin). Doxorubicin is used in single therapy to treat hematological malignancies (blood cancers, leukaemia, lymphoma), many types of carcinoma (solid tumors), and soft tissue sarcomas. It has many serious side effects, such as nausea and vomiting, hair loss, myelosuppression, oral mucositis, skin reactions, and redness, but the most serious one is cardiotoxicity. Because of the risk of heart attack and congestive heart failure, the total dose administered to patients has to be accurately monitored. With the aim of lowering the side effects and achieving targeted delivery of doxorubicin into the tumor tissue, different nanoparticles are being studied. The drug can be bound to the surface of a nanoparticle, encapsulated in its inner cavity, or incorporated into the structure of the nanoparticle. Among others, carbon nanoparticles (graphene, carbon nanotubes, fullerenes) are intensively studied. Besides the numerous inorganic nanoparticles, organic ones, mainly lipid-based and polymeric nanoparticles, also exhibit great potential. The aim of this work was to perform a toxicity study of free doxorubicin compared to doxorubicin conjugated with various nanotransporters. The effects of liposomes, fullerenes, graphene, and carbon nanotubes on the toxicity were analyzed. As a first step, the binding efficacy between doxorubicin and each nanotransporter was determined. The highest efficacy was detected in the case of liposomes (85% of the applied drug was encapsulated), followed by graphene, carbon nanotubes, and fullerenes. For the toxicological studies, chicken embryos incubated under controlled conditions (37.5 °C, 45% rH, rotation every 2 hours) were used.
On the 7th developmental day of the chicken embryos, doxorubicin or a doxorubicin-nanotransporter complex was applied to the chorioallantoic membrane of the eggs, and the viability was analyzed every day until the 17th developmental day. The embryos were then extracted from the shell, and the distribution of doxorubicin in the body was analyzed by measuring organ extracts using laser-induced fluorescence detection. The chicken embryo mortality caused by free doxorubicin (30%) was significantly lowered by conjugation with nanomaterials. The highest accumulation of doxorubicin and doxorubicin-nanotransporter complexes was observed in the liver tissue.
Keywords: doxorubicin, chicken embryos, nanotransporters, toxicity
Procedia PDF Downloads 447
332 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, the electric field, and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thereby to calculate the signal reported by each electrometer. Based on the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained via various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration. It was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, Von Neumann entropy
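The entropy benchmark can be illustrated by treating the instrument's transfer matrix as a normalized distribution of how size-bin information maps onto detector channels and computing its Shannon entropy. This is only a simplified sketch of the idea: the 4x4 transfer matrix below is hypothetical, and the paper's actual method extracts a Von Neumann entropy from the transfer matrix rather than the plain Shannon entropy shown here.

```python
import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """Shannon entropy (bits) of a probability distribution."""
    p = p[p > 0]                      # ignore zero-probability entries
    return float(-(p * np.log2(p)).sum())

# Hypothetical 4x4 transfer matrix: rows = detecting rings, columns = size bins.
# Entry T[i, j] ~ signal on ring i per unit concentration in size bin j.
T = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.15, 0.60, 0.20, 0.05],
    [0.10, 0.15, 0.55, 0.20],
    [0.05, 0.05, 0.20, 0.70],
])
p = T.flatten() / T.sum()             # normalize the whole matrix to a distribution
H = shannon_entropy(p)
print(round(H, 3))
```

Candidate ring configurations can then be ranked by this entropy value, as the abstract proposes, and compared against the accuracy of the size-distribution predictions each configuration yields.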
Procedia PDF Downloads 342
331 Fracture Toughness Characterization of Single Edge Notch Bend (SENB) Testing Using a DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data characterizing the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated for single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack length to specimen depth ratios were fabricated and tested in single edge notch bending. ASTM E1820 and BS 7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to available results from classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models.
The results of the laboratory testing in this study offer a valuable database for developing and validating damage models that are able to predict crack propagation in pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
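Resistance functions of the kind fitted above (J-integral vs. crack growth) are commonly represented, e.g. in ASTM E1820, as a power law J = C1(Δa)^C2 fitted by least squares in log space. The sketch below illustrates that fit; the (Δa, J) data points are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical resistance-curve data: crack extension Δa (mm) and J (kJ/m^2)
da = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 2.5])
J = np.array([180.0, 310.0, 470.0, 590.0, 700.0, 790.0])

# Power-law fit J = C1 * (Δa)^C2, linearized as ln J = ln C1 + C2 * ln Δa.
# np.polyfit returns the highest-degree coefficient first: [slope, intercept].
C2, lnC1 = np.polyfit(np.log(da), np.log(J), 1)
C1 = np.exp(lnC1)
print(f"J = {C1:.0f} * da^{C2:.2f}")
```

Fitting in log space weights the small-Δa points (near crack initiation) comparably to the large-Δa points, which is usually desirable when the fitted curve is used to read off an initiation toughness.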
Procedia PDF Downloads 62
330 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments
Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva
Abstract:
The trend in industrial environments is towards the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to assess the quality of their production. This capability is becoming crucial due to the evolution of industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool for ensuring more stable production, reducing scrap and the cost of non-conformities. The most widespread measuring instruments that allow accurate measurements at these scales are optical microscopes, whether traditional, confocal, or focus-variation microscopes, profile projectors, or any other similar measurement system. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability for 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem that is still being studied and does not have a unique answer. In addition, if commercial material standards for the micro- and nano-scale are considered, two main problems arise. On the one hand, those material standards that could be considered complete and very interesting do not provide traceability for dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments and, as a result, manufacturers will work in the absence of traceability.
To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture, and, of course, traceable to the definition of the meter through simple methods. By using these 'customized standards', it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some traceability. It is important to note that, although this traceability is clearly incomplete, this situation is preferable to working in the absence of it. Recently, the versatility and utility of laser technology and other additive manufacturing (AM) technologies for manufacturing customized material standards have been demonstrated. In this paper, the authors propose manufacturing a customized material standard using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.
Keywords: industrial environment, material standards, optical measuring instrument, traceability
Procedia PDF Downloads 121
329 Investigating the Effect of Metaphor Awareness-Raising Approach on the Right-Hemisphere Involvement in Developing Japanese Learners' Knowledge of Different Degrees of Politeness
Authors: Masahiro Takimoto
Abstract:
The present study explored how the metaphor awareness-raising approach affects the involvement of the right hemisphere in developing EFL learners' knowledge of the different degrees of politeness embedded within different request expressions. The study was motivated by theoretical considerations regarding conceptual projection and the metaphorical idea that 'politeness is distance'; it applied these considerations to develop Japanese learners' knowledge of the different politeness degrees and to explore the connection between metaphorical concept projection and right-hemisphere dominance. Japanese EFL learners do not know certain language strategies (e.g., English requests can be mitigated with biclausal downgraders, including the if-clause with past-tense modal verbs) and have difficulty adjusting the politeness degrees attached to request expressions according to situations. The present study used a pre/post-test design to reaffirm the efficacy of the cognitive technique and its connection to right-hemisphere involvement via the mouth asymmetry technique. Mouth asymmetry measurement has been utilized because speech articulation, normally controlled mainly by one side of the brain, causes the muscles on the opposite side of the mouth to move more during speech production. The present research did not administer a delayed post-test because it emphasized determining whether metaphor awareness-raising approaches for developing EFL learners' pragmatic proficiency entail right-hemisphere activation. Each test contained an acceptability judgment test (AJT), along with a speaking test in the post-test. The results show that the metaphor awareness-raising group performed significantly better than the control group on both the acceptability judgment and speaking tests in the post-test.
These data revealed that the metaphor awareness-raising approach can promote L2 learning because it aids input enhancement and concept projection; through these, the participants were able to comprehend an abstract concept, the degree of politeness, in terms of the spatial concept of distance. Accordingly, the proximal-distal metaphor enabled the study participants to connect the newly spatio-visualized concept of distance to the different degrees of politeness attached to different request expressions; furthermore, they recalled them with the left side of the mouth opening wider than the right. This supports findings from previous studies indicating the possible involvement of the brain's right hemisphere in metaphor processing.
Keywords: metaphor awareness-raising, right hemisphere, L2 politeness, mouth asymmetry
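The mouth-asymmetry measure described above is commonly summarized as a laterality index comparing left and right mouth opening during speech. A minimal sketch; the formula convention and the millimeter inputs are illustrative assumptions, not taken from this study:

```python
def laterality_index(left_opening_mm, right_opening_mm):
    """Laterality index in [-1, 1]: positive values mean the left side of the
    mouth opens wider, which (since speech musculature is contralaterally
    controlled) is taken to suggest right-hemisphere involvement."""
    total = left_opening_mm + right_opening_mm
    if total == 0:
        raise ValueError("mouth openings cannot both be zero")
    return (left_opening_mm - right_opening_mm) / total

index = laterality_index(12.0, 10.0)  # positive: left side slightly wider
```

A positive index across recall trials would correspond to the left-wider pattern the abstract reports.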
328 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model
Authors: Ichiro Takahashi
Abstract:
One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. The finding offers a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" is widely shared among the major schools of macroeconomics: the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and demand/supply shocks can move the economy away from that path only temporarily. This is the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, is no longer independent of business cycle phenomena. This paper attempts to answer the above question using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies.
(1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments: the quantity adjustment and the relative-cost adjustment of the capital stock. The first is obvious and is assumed by many business cycle theorists. In the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were accompanied by hysteresis in real wages, interest rates, and investment. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability
327 TiO₂ Nanotube Array Based Selective Vapor Sensors for Breath Analysis
Authors: Arnab Hazra
Abstract:
Breath analysis is a quick, noninvasive, and inexpensive technique for disease diagnosis that can be used on people of all ages without any risk. Only a limited number of volatile organic compounds (VOCs) can be associated with the occurrence of specific diseases. These VOCs can be considered disease markers or breath markers. Selective detection of a breath marker at a specific concentration in exhaled human breath is required to detect a particular disease. For example, acetone (C₃H₆O), ethanol (C₂H₅OH), and ethane (C₂H₆) are breath markers, and abnormal concentrations of these VOCs in exhaled human breath indicate diabetes mellitus, renal failure, and breast cancer, respectively. Nanomaterial-based vapor sensors are inexpensive, small, and promising candidates for the detection of breath markers. In practical measurement, selectivity is the most crucial issue, as trace amounts of a breath marker must be identified accurately in the presence of several interfering vapors and gases. The current article concerns a novel technique for selective, lower-ppb-level detection of breath markers at very low temperature based on TiO₂ nanotube array vapor sensor devices. A highly ordered and oriented TiO₂ nanotube array was synthesized by electrochemical anodization of high-purity titanium (Ti) foil. An electrolyte of 0.5 wt% NH₄F and 10 vol% H₂O in ethylene glycol was used, and anodization was carried out for 90 min at a DC potential of 40 V. An Au/TiO₂-nanotube/Ti sandwich-type sensor device was fabricated for the selective detection of VOCs in the low concentration range. Initially, the sensor was characterized by recording its resistive and capacitive changes within the valid concentration range for each breath marker (or organic vapor). The sensor resistance decreased and the sensor capacitance increased with increasing vapor concentration.
The ratio of the resistive slope (mR) to the capacitive slope (mC) then provides a concentration-independent constant (M) for a particular vapor. An unknown vapor is identified when the ratio of its resistive change to its capacitive change, at any concentration, matches a previously calculated constant (M). After successful identification of the target vapor, its concentration is calculated from the straight-line behavior of resistance as a function of concentration. The current technique is suitable for detecting a particular vapor within a mixture of interfering vapors.
Keywords: breath marker, vapor sensors, selective detection, TiO₂ nanotube array
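The slope-ratio identification scheme described in the abstract can be sketched as follows: calibrate resistance and capacitance slopes against concentration for each known marker, store the constant M = mR/mC, match an unknown reading's ΔR/ΔC ratio against the stored constants, then read concentration off the straight-line resistance fit. The function names, matching tolerance, and calibration data below are assumptions for illustration:

```python
import numpy as np

def calibrate(concs, resistances, capacitances):
    """Fit straight lines of sensor resistance (decreasing) and capacitance
    (increasing) versus vapor concentration; store the concentration-
    independent ratio M = mR / mC plus the resistance fit for later
    concentration readout."""
    mR, bR = np.polyfit(concs, resistances, 1)
    mC, _ = np.polyfit(concs, capacitances, 1)
    return {"M": mR / mC, "mR": mR, "bR": bR}

def identify(delta_r, delta_c, library, tol=0.1):
    """Match an unknown vapor's dR/dC ratio against the calibrated constants."""
    ratio = delta_r / delta_c
    best = min(library, key=lambda name: abs(library[name]["M"] - ratio))
    return best if abs(library[best]["M"] - ratio) <= tol * abs(ratio) else None

def concentration(resistance, cal):
    """Invert the straight-line fit R = mR * c + bR for the identified vapor."""
    return (resistance - cal["bR"]) / cal["mR"]
```

With a library such as `{"acetone": calibrate(...), "ethanol": calibrate(...)}`, an unknown reading is first identified via its ΔR/ΔC ratio and only then converted to a concentration, mirroring the two-step procedure in the abstract.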
326 Analysis of Urban Flooding in Wazirabad Catchment of Kabul City with Help of Geo-SWMM
Authors: Fazli Rahim Shinwari, Ulrich Dittmer
Abstract:
Like many megacities around the world, Kabul is facing severe problems due to the rising frequency of urban flooding. Since 2001, Kabul has been experiencing rapid population growth because of the repatriation of refugees and internal migration. Due to unplanned development, green areas inside the city and hilly areas within and around it have been converted into new housing settlements, which has increased runoff. Trenches along the roadside make up the unplanned drainage network of the city, draining the combined sewer flow. In the rainy season, overflow occurs, and after the streets dry, the dust particles contaminate the air, a major cause of air pollution in Kabul city. In this study, a stormwater management model is introduced as a basis for a systematic approach to urban drainage planning in Kabul. For this purpose, Kabul city was delineated into 8 watersheds with the help of a one-meter-resolution LIDAR DEM. A stormwater management model was developed for the Wazirabad catchment using available data and literature values. Due to the lack of long-term meteorological data, the model was only run for hourly rainfall data of a rain event that occurred in April 2016. The rain event from 1st to 3rd April, with a maximum intensity of 3 mm/hr, caused severe flooding in the Wazirabad catchment of Kabul city. The model estimated flooding at several points of the catchment; as actual measurement of flooding was not possible, the results were compared with information obtained from local people, Kabul Municipality, and the Capital Region Independent Development Authority. The model helped to identify areas where flooding occurred because of insufficient drainage capacity and areas where the main reason for flooding was blockage in the drainage canals. The model was then used for further analysis to find a sustainable solution to the problem. The option of constructing new canals was analyzed, and two new canals were proposed that would reduce the flooding frequency in the Wazirabad catchment of Kabul city.
By establishing a methodology for building a stormwater management model from digital data and information, the study fulfilled its primary objective; the same methodology can be used for the other catchments of Kabul city to prepare emergency and long-term plans for the city's drainage system.
Keywords: urban hydrology, storm water management, modeling, SWMM, GEO-SWMM, GIS, identification of flood vulnerable areas, urban flooding analysis, sustainable urban drainage
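The subcatchment runoff reasoning above can be illustrated with the rational method, a common first-cut check of peak runoff. This is not the SWMM engine itself (which routes runoff through a nonlinear reservoir); the runoff coefficient and catchment area below are assumptions, and only the 3 mm/hr intensity comes from the abstract:

```python
def rational_peak_flow(runoff_coeff, intensity_mm_per_hr, area_ha):
    """First-cut peak runoff (m^3/s) via the rational method Q = C * i * A.
    With i in mm/hr and A in hectares, dividing by 360 yields m^3/s."""
    return runoff_coeff * intensity_mm_per_hr * area_ha / 360.0

# 3 mm/hr (the April 2016 peak intensity) over an assumed 50 ha
# subcatchment with an assumed runoff coefficient of 0.8:
q_peak = rational_peak_flow(0.8, 3.0, 50.0)  # roughly 0.33 m^3/s
```

A check like this gives an order-of-magnitude sanity bound on what a calibrated SWMM subcatchment should produce for the same storm.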
325 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents
Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady
Abstract:
Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is currently a lack of consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation, and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the ‘early defect’ CT classification. The group previously referred to as a ‘progressive stage’ defect on CT can be split into 2A and 2B categories: 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal-stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until becoming symptom-free.
At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation focuses on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, a CT scan is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (2B and 3) cannot heal and are not braced; rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.
Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol
324 About the State of Students’ Career Guidance in the Conditions of Inclusive Education in the Republic of Kazakhstan
Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip
Abstract:
Over the years of independence, Kazakhstan has not only ratified international documents regulating children's rights to inclusive education but also developed its own inclusive educational policy. Along with this, the state pays particular attention to high school students' preparedness for professional self-determination. However, a number of problematic issues in this field have been revealed, such as the lack of systemic mechanisms coordinating stakeholders' actions in preparing schoolchildren for a conscious choice of an in-demand profession that meets their individual capabilities and special educational needs (SEN). Analysis of the current situation indicates that school graduates' adaptation to the labor market does not meet the existing demands of society. According to the Ministry of Labor and Social Protection of the Population of the Republic of Kazakhstan, about 70% of Kazakhstani school graduates find it difficult to choose a profession, 87% of schoolchildren make their career choice under the influence of parents and school teachers, and 90% of schoolchildren and their parents have no idea about the most in-demand professions on the market. The results of a study conducted by Korlan Syzdykova in 2016 indicated the urgent need of Kazakhstani school graduates for extensive information about in-demand professions and for professional assistance in choosing a profession in accordance with their individual skills, abilities, and preferences. The results of a survey conducted by the Information and Analytical Center among heads of colleges in 2020 showed that despite significant steps in creating conditions for students with SEN, they face challenges in studying because of the poor career guidance provided to them in schools. The results of a study conducted by the Center for Inclusive Education of the National Academy of Education named after Y.
Altynsarin in the state's general education schools in 2021 demonstrated the lack of career guidance and of pedagogical and psychological support for children with SEN. To investigate these issues, a further study was conducted to examine the state of students' career guidance and socialization, taking into account their SEN. The hypothesis of this study proposed that to prepare school graduates for a conscious career choice, school teachers and specialists need to develop their competencies in the early identification of students' interests, inclinations, and SEN, and to ensure the necessary support for them. Five regions of the state were involved in the study, selected according to geographical location. A triangulation approach was utilized to ensure the credibility and validity of the research findings, including both theoretical (analysis of existing statistical data, legal documents, and results of previous research) and empirical (school survey for students; interviews with parents, teachers, and representatives of school administration) methods. The data were analyzed independently and compared to each other. The survey included questions related to the provision of pedagogical support for school students in making their career choice. Ethical principles were observed in developing the methodology and in collecting, analyzing, and distributing the data and results. Based on the results, methodological recommendations on students' career guidance were developed for school teachers and specialists, taking into account students' individual capabilities and SEN.
Keywords: career guidance, children with special educational needs, inclusive education, Kazakhstan
323 Distinct Patterns of Resilience Identified Using Smartphone Mobile Experience Sampling Method (M-ESM) and a Dual Model of Mental Health
Authors: Hussain-Abdulah Arjmand, Nikki S. Rickard
Abstract:
The response to stress can be highly heterogeneous and may be influenced by methodological factors. The integrity of data will be optimized by measuring both positive and negative affective responses to an event, by measuring responses in real time as close to the stressful event as possible, and by utilizing data collection methods that do not interfere with naturalistic behaviours. The aim of the current study was to explore short-term prototypical responses to major stressor events on outcome measures encompassing both positive and negative indicators of psychological functioning. A novel mobile experience sampling methodology (m-ESM) was utilized to monitor affective responses to stressors in real time. A smartphone mental health app (‘Moodprism’), which prompts users daily to report both their positive and negative mood, as well as whether any significant event has occurred in the past 24 hours, was developed for this purpose. A sample of 142 participants was recruited through the promotion of this app. Participants' daily reported experience of stressor events, levels of depressive symptoms, and positive affect were collected across a 30-day period as they used the app. For each participant, major stressor events were identified based on the subjective severity of the event as rated by the user. Depression and positive affect ratings were extracted for the three days following the event. Responses to the event were scaled relative to the participant's general reactivity across the remainder of the 30-day period. Participants were first clustered into groups based on initial reactivity and subsequent recovery following a stressor event. This revealed distinct patterns of responding in depressive symptomatology and positive affect. Participants were then grouped based on their allocations to clusters in each outcome variable. A highly individualised pattern of responding to stressor events, in both symptoms of depression and levels of positive affect, was observed.
A complete description of the novel profiles identified will be presented at the conference. These findings suggest that real-time measurement of both positive and negative functioning in response to stressors yields a more complex set of responses than previously observed with retrospective reporting. The use of smartphone technology to measure individualized responding also shed significant insight.
Keywords: depression, experience sampling methodology, positive functioning, resilience
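Scaling each post-event response against the participant's own baseline, as described above, might look like the following sketch. The z-scoring convention, the three-day window, and the reactivity/recovery summaries are assumptions; the study does not specify its exact scaling:

```python
import numpy as np

def event_response_profile(daily_scores, event_day, window=3):
    """Scale a participant's post-event response against their own baseline:
    z-score the `window` days after the event using the mean/SD of all
    remaining (non-event) days in the monitoring period, then summarize
    the trajectory as initial reactivity and subsequent recovery."""
    scores = np.asarray(daily_scores, dtype=float)
    post = scores[event_day + 1 : event_day + 1 + window]
    baseline = np.r_[scores[:event_day], scores[event_day + 1 + window :]]
    z = (post - baseline.mean()) / baseline.std(ddof=1)
    reactivity = z[0]          # deviation on the first day after the event
    recovery = z[0] - z[-1]    # how much of that deviation resolved by day 3
    return reactivity, recovery
```

Profiles computed this way for each participant's most severe event could then be clustered on (reactivity, recovery) to recover prototypical resilient versus affected trajectories; the abstract does not name the clustering method used.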
322 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals
Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde
Abstract:
About 90 percent of world merchandise trade by volume is carried by sea. Maritime transport remains the backbone of international trade and globalization, and all seaborne goods require at least two ports, as origin and destination. Among seaborne cargos, container traffic is a prosperous market, accounting for about 16% of volume. Although containerized cargos are lesser in tonnage, containers carry the highest-value cargos of all; that is why efficient handling of containers in ports is very important. Accidents are the foremost causes of port inefficiency and of a surge in total transport cost. Despite the various port safety management systems (PSMS) in place, statistics show that numerous accidents occur in ports. Some of them claim people's lives; others damage goods, vessels, port equipment, and/or the environment. Several accident investigations illustrate that the most common accidents take place during transport operations, in some cases accounting for 68.6% of all events; therefore, providing a safer workplace depends on reducing collision risk. In order to quantify risks in the port area, different variables can be used as exposure measures. One of the main motives for defining and using exposure in studies related to infrastructure is to account for differences in intensity of use, so as to make comparisons meaningful. In various studies of container handling in ports and intermodal terminals, different risk exposures, and the likelihood of each event, have been selected. Vehicle collision within the port area (10⁻⁷ per kilometer of vehicle distance travelled) and dropping of containers from cranes, forklift trucks, or rail-mounted gantries (1 × 10⁻⁵ per lift) are some examples.
In line with the objective of the current research, three categories of accidents were selected for collision risk assessment: fall of a container during ship-to-shore operation, dropping of a container during transfer operation, and collision between vehicles and objects within the terminal area. Consequences, exposures, and probabilities were then identified for each accident type. Reducing collision risk thus relies profoundly on picking the right risk exposures and probabilities for the selected accidents in order to prevent collision accidents in container terminals; within the framework of risk calculations, such exposures and probabilities can also be useful in assessing the effectiveness of safety programs in ports.
Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability
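In the risk-calculation framework the abstract refers to, each accident category combines a probability per unit of exposure, the exposure itself, and a consequence. A minimal sketch using the two probability figures quoted above; the annual exposure volumes and consequence costs are illustrative assumptions:

```python
def expected_loss(prob_per_unit, exposure_units, consequence_cost):
    """Expected loss = probability per exposure unit x exposure x consequence."""
    return prob_per_unit * exposure_units * consequence_cost

# Illustrative annual figures for one terminal; only the two probabilities
# (1e-5 per lift, 1e-7 per vehicle-km) come from the text above.
risks = {
    "dropped_container": expected_loss(1e-5, 500_000, 50_000),    # 500k lifts/yr
    "vehicle_collision": expected_loss(1e-7, 2_000_000, 80_000),  # 2M vehicle-km/yr
}
```

Comparing such figures before and after a safety program is one way to assess its effectiveness, as the abstract suggests.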
321 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly
Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale
Abstract:
Accurate thrust measurement is required for aircraft during takeoff and after ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if the thrust produced is not sufficient, the takeoff can be aborted and an accident avoided. After ski-jump, the thrust produced by the engine is critical because the horizontal speed of the aircraft is less than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, of which the two side mounts transfer the engine thrust to the airframe. The third mount takes only the weight component; it does not take any thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out. The strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested on a universal test machine to determine the equivalent elasticity of the assembly; this elasticity value is used in the analytical approach for estimating engine thrust. The estimated thrust is compared with the test-bed load cell thrust data. The experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: Strain gauges are mounted at two diametrically opposite locations on the tapered portion of the engine mount sleeve, both in the horizontal plane. In this way, these strain gauges register no strain due to the weight of the engine (except negligible strain due to the material's Poisson's ratio) or due to hoop stress. Only the third mount's strain gauge shows strain when the engine is not running, i.e.
strain due to the weight of the engine. When the engine is running, all the load is taken by the side mounts. The strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side of the sleeve showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower ones: the accuracy at the maximum power setting is 99.7%, whereas at low power settings it is 78%.
Keywords: engine mounts, finite element analysis, strain gauge, stress
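The chain from quarter-bridge output to estimated mount load can be sketched with the standard small-strain bridge relation. The gauge factor, the equivalent-elasticity figure, and the bending-rejecting average of the two gauges are illustrative assumptions, not the study's calibrated values:

```python
def strain_from_quarter_bridge(v_out, v_excitation, gauge_factor=2.0):
    """Small-strain approximation for a quarter Wheatstone bridge:
    Vout/Vex ~= GF * eps / 4, hence eps ~= 4 * (Vout/Vex) / GF."""
    return 4.0 * (v_out / v_excitation) / gauge_factor

def thrust_per_mount(eps_forward, eps_rear, k_equiv):
    """Thrust carried by one side mount. k_equiv is the equivalent elasticity
    of the mount assembly (N per unit strain), as measured on a universal
    test machine. The forward gauge reads compressive (negative) and the
    rear gauge tensile (positive); averaging magnitudes rejects bending."""
    return k_equiv * (abs(eps_forward) + abs(eps_rear)) / 2.0

# 1 mV output on a 5 V bridge with GF = 2 corresponds to 400 microstrain:
eps = strain_from_quarter_bridge(0.001, 5.0)
```

Total engine thrust would then sum the contributions of both side mounts.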
320 Experimental Investigation of Cutting Forces and Temperature in Bone Drilling
Authors: Vishwanath Mali, Hemant Warhatkar, Raju Pawade
Abstract:
Drilling of bone has always been challenging for surgeons due to the adverse effects it may impart to bone tissues. Force has to be applied manually by the surgeon during conventional bone drilling, which may lead to permanent death of bone tissues and nerves. During bone drilling, the temperature of the bone tissues rises above 47 °C, causing thermal osteonecrosis, which results in screw loosening and subsequent implant failure. An attempt has been made here to study the input drilling parameters and surgical drill bit geometry affecting bone health during bone drilling. A One-Factor-At-a-Time (OFAT) method was used to plan the experiments. The input drilling parameters studied were spindle speed and feed rate; the drill bit geometry parameters studied were point angle and helix angle. The output variables were drilling thrust force and bone temperature. The experiments were conducted on goat femur bone at a room temperature of 30 °C. A KISTLER cutting force dynamometer (Type 9257BA) was used for measuring thrust forces, and NI LabVIEW software was used for continuous temperature data acquisition. A fixture was made on an RPT machine to hold the bone specimen during the drilling operation. Bone specimens were preserved in a deep freezer (LABTOP make) at -40 °C. For the drilling parameters, it was observed that at constant feed rate, thrust force and temperature decrease as spindle speed increases, and at constant spindle speed, thrust force and temperature increase as feed rate increases. For the drill bit geometry, at constant helix angle, thrust force and temperature increase as point angle increases, and at constant point angle, thrust force and temperature decrease as helix angle increases. Hence, it is concluded that temperature rises as thrust force increases. Among the drilling parameter runs, the lowest thrust force and temperature, i.e.
35.55 N and 36.04 °C respectively, were recorded at a spindle speed of 2000 rpm and a feed rate of 0.04 mm/rev. Among the drill bit geometry runs, the lowest thrust force and temperature, i.e., 40.81 N and 34 °C respectively, were recorded at a point angle of 70° and a helix angle of 25°. Hence, to avoid thermal necrosis of bone, it is recommended to use a higher spindle speed, a lower feed rate, a low point angle, and a high helix angle. The hard nature of cortical bone contributes to a greater rise in temperature, whereas a considerable drop in temperature is observed during cancellous bone drilling.
Keywords: bone drilling, helix angle, point angle, thrust force, temperature, thermal necrosis
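The OFAT experiment plan used above can be sketched as a run-list generator that varies one factor at a time around a baseline. The baseline values below are assumptions; only 2000 rpm, 0.04 mm/rev, 70°, and 25° appear in the abstract:

```python
def ofat_design(baseline, levels):
    """One-Factor-At-a-Time run list: start from the baseline run, then vary
    each factor through its levels while holding all others at baseline."""
    runs = [dict(baseline)]
    for factor, values in levels.items():
        for value in values:
            if value != baseline[factor]:
                run = dict(baseline)
                run[factor] = value
                runs.append(run)
    return runs

baseline = {"speed_rpm": 1000, "feed_mm_rev": 0.08,
            "point_angle_deg": 90, "helix_angle_deg": 15}
levels = {"speed_rpm": [500, 1000, 2000],
          "feed_mm_rev": [0.04, 0.08, 0.12]}
runs = ofat_design(baseline, levels)  # baseline + 4 one-factor variations
```

Unlike a factorial design, OFAT cannot detect interactions (e.g. between speed and feed), which is a known limitation of the approach.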
319 The Impact of the Variation of Sky View Factor on Landscape Degree of Enclosure of Urban Blue and Green Belt
Authors: Yi-Chun Huang, Kuan-Yun Chen, Chuang-Hung Lin
Abstract:
The urban blue and green belt is part of the city landscape and an important constituent of the urban environment and appearance. The Hsinchu East Gate Moat is situated in the center of the city; it not only has a wealth of historical and cultural resources but also combines green belt and blue belt qualities. The moat runs more than a thousand meters through the vital green and blue belts of downtown, and each section presents a different character from south to north. The water area and the surrounding green belt spread in a linear band, and the water body and the rich, diverse river banks form an urban green belt of many layers. The watercourse and green belt design let users connect with the blue belt in different ways; the integration of the Hsinchu East Gate and the moat has therefore become one of the unique urban landscapes in Taiwan. The study is based on fact-finding at the Hsinchu East Gate Moat in northern Taiwan, investigating the impact of the city's SVF variation on the spatial sequence of the urban blue and green belt landscape, with visual analysis by constituent cross-sections, and then comparing the influence of different leaf area indices, a variable ecological factor, on the degree of enclosure. We surveyed the landscape design of the open space and measured the existing structural features of the plant canopy, including the heights of plants and branches, crown diameter, and diameter at breast height, using Geographic Information System (GIS) diagrams and on-the-spot measurement. The blue-green belt areas of the north and south districts were divided into 20-meter units from the East Gate Roundabout as the epicenter, and survey points were set up to measure the SVF above each point; we then carried out quantitative analysis of the data to calculate the degree of enclosure of the open landscape.
The results can serve as a reference for the composition of future river landscapes and for the practical dynamic spatial planning of blue and green belt landscapes.
Keywords: sky view factor, degree of enclosure, spatial sequence, leaf area indices
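Degree of enclosure is often taken as the complement of the sky view factor (SVF = 1 for a fully open hemisphere). A minimal sketch under that common convention, with illustrative SVF readings for the 20 m survey points; neither the convention nor the readings is stated in the abstract:

```python
def degree_of_enclosure(svf):
    """Degree of enclosure as the complement of the sky view factor:
    SVF = 1 means a fully open sky, so enclosure = 1 - SVF."""
    if not 0.0 <= svf <= 1.0:
        raise ValueError("SVF must lie in [0, 1]")
    return 1.0 - svf

# Illustrative SVF readings at 20 m survey points along one transect:
svf_series = [0.85, 0.62, 0.44, 0.51]
enclosure_series = [degree_of_enclosure(s) for s in svf_series]
```

Plotting such a series along the transect is one way to visualize the spatial sequence of enclosure the study describes.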
318 Temperature Dependent Magneto-Transport Properties of MnAl Binary Alloy Thin Films
Authors: Vineet Barwal, Sajid Husain, Nanhe Kumar Gupta, Soumyarup Hait, Sujeet Chaudhary
Abstract:
High perpendicular magnetic anisotropy (PMA) and a low damping constant (α) in ferromagnets are among the necessary requirements for potential applications in the field of spintronics. In this regard, the ferromagnetic τ-phase of MnAl possesses the highest PMA (Ku > 10⁷ erg/cc) at room temperature, high saturation magnetization (Ms ~ 800 emu/cc), and a Curie temperature of ~395 K. In this work, we have investigated the magnetotransport behaviour of this potentially useful binary system. MnₓAl₁₋ₓ films were synthesized by co-sputtering (pulsed DC magnetron sputtering) on Si/SiO₂ substrates (where SiO₂ is the native oxide layer) using 99.99% pure Mn and Al sputtering targets. Films of constant thickness (~25 nm) were deposited at different growth temperatures (Tₛ), viz. 30, 300, 400, 500, and 600 °C, at a deposition rate of ~5 nm/min. Prior to deposition, the chamber was pumped down to a base pressure of 2×10⁻⁷ Torr. During sputtering, the chamber was maintained at a pressure of 3.5×10⁻³ Torr with a 55 sccm Ar flow rate. The films were not capped for the electronic transport measurements, which leaves a possibility of metal oxide formation on the surface of MnAl (both Mn and Al have an affinity for oxide formation). In-plane and out-of-plane transverse magnetoresistance (MR) measurements on films sputtered under optimized growth conditions revealed non-saturating behavior, with MR values of ~6% and 40% at 9 T, respectively, at 275 K. The resistivity shows a parabolic dependence on the field H when H is weak. At higher H, a non-saturating positive MR that increases exponentially with the field strength is observed, a typical character of a hopping-type conduction mechanism. An anomalous decrease in MR is observed on lowering the temperature.
From the temperature dependence of resistivity, it is inferred that the two competing states are metallic and semiconducting, respectively, and that the energy scale of the phenomenon produces the most interesting effects, i.e., the metal-insulator transition and hence the maximum sensitivity to external fields, at room temperature. The theory of disordered 3D systems effectively explains the crossover of the temperature coefficient of resistivity from positive to negative with lowering of temperature. These preliminary findings on the MR behavior of MnAl thin films will be presented in detail. The anomalous large MR in the mixed-phase MnAl system is evidently useful for future spintronic applications. Keywords: magnetoresistance, perpendicular magnetic anisotropy, spintronics, thin films
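The MR percentages quoted above follow the standard definition MR(H) = [R(H) − R(0)] / R(0) × 100%. A minimal sketch of the calculation, using hypothetical resistance readings rather than the measured data from the study:

```python
def mr_percent(resistance, r_zero_field):
    """Magnetoresistance ratio MR(H) = (R(H) - R(0)) / R(0) * 100, in percent."""
    return (resistance - r_zero_field) / r_zero_field * 100.0

# Hypothetical resistance readings (ohms) at zero field and at 9 T
r0 = 120.0
r_9T = 127.2
print(f"MR at 9 T: {mr_percent(r_9T, r0):.1f}%")  # 6.0% for these example values
```

A non-saturating MR curve means this ratio keeps growing with field strength instead of leveling off, as reported for the MnAl films.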
Procedia PDF Downloads 124
317 Rangeland Monitoring by Computerized Technologies
Abstract:
Every piece of rangeland has a different set of physical and biological characteristics. This requires the manager to synthesize various information through regular monitoring to define change trends and make the right decisions for sustainable management. Range managers therefore need computerized technologies to monitor rangeland and select the best management practices. There are four examples of computerized technologies that can benefit sustainable management: (1) Photographic method for cover measurement: The method was tested in different vegetation communities in semi-humid and arid regions. Interpretation of pictures of quadrats was done using Arc View software. Data analysis was done in SPSS software using the paired t-test. Based on the results, the photographic method can generally be used to measure ground cover in most vegetation communities. (2) GPS application for matching ground samples to satellite pixels: In the two provinces of Tehran and Markazi, six reference points were selected, and at each point, eight GPS models were tested. A significant relation among GPS model, time and location with the accuracy of estimated coordinates was found: the best time for GPS application was in the morning hours, and the Etrex Vista had less error than the other models. After selection of a suitable method, the coordinates of plots along four transects at each of 6 rangeland sites in Markazi province were recorded. (3) Application of satellite data for rangeland monitoring: Focusing on the long-term variation of vegetation parameters such as vegetation cover and production is essential. Our study in grass and shrub lands showed significant correlations between quantitative vegetation characteristics and satellite data. So it is possible to monitor rangeland vegetation using digital data for sustainable utilization. 
(4) Rangeland suitability classification with GIS: Range suitability assessment can facilitate sustainable management planning. The outputs of three sub-models (sensitivity to erosion, water suitability, and forage production) were entered into the final range suitability classification model. GIS facilitated the classification of range suitability and produced suitability maps for sheep grazing. Generally, digital computers assist range managers in interpreting, modifying, calibrating or integrating information for correct management. Keywords: computer, GPS, GIS, remote sensing, photographic method, monitoring, rangeland ecosystem, management, suitability, sheep grazing
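The paired t-test used to compare photographic and field cover estimates can be sketched as follows; the cover values are hypothetical stand-ins for real quadrat data, and plain Python replaces the SPSS workflow described in the abstract:

```python
import math
from statistics import mean, stdev

# Hypothetical paired ground-cover estimates (%) for the same quadrats:
# direct field readings vs. values interpreted from photographs.
field_cover = [42.0, 55.0, 38.0, 61.0, 47.0, 53.0, 44.0, 58.0]
photo_cover = [40.5, 56.0, 37.0, 62.5, 46.0, 54.5, 43.0, 59.0]

# Paired t statistic: mean of the per-quadrat differences over its standard error
diffs = [f - p for f, p in zip(field_cover, photo_cover)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Two-tailed critical value for df = 7 at alpha = 0.05 is about 2.365;
# |t| below this means the two methods do not differ significantly.
print(f"t = {t_stat:.3f}, methods agree: {abs(t_stat) < 2.365}")
```

A non-significant result, as in the example above, is what supports using the cheaper photographic method in place of field measurement.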
Procedia PDF Downloads 365
316 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance using Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and the aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in the ESDU models. 
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL. Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
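The single-point estimate described above (integral time scale from the velocity autocorrelation, converted to a length via Taylor's hypothesis) can be sketched as follows. The velocity record here is synthetic (an AR(1) process standing in for anemometer data), so the resulting number is illustrative only:

```python
import random

def integral_length_scale(u, dt, mean_velocity):
    """Longitudinal length scale L = U * T, where T integrates the
    autocorrelation of the velocity fluctuations up to its first zero
    crossing (Taylor's frozen-turbulence hypothesis)."""
    n = len(u)
    mu = sum(u) / n
    f = [x - mu for x in u]
    var = sum(x * x for x in f) / n
    T = 0.0
    for lag in range(1, n // 2):
        rho = sum(f[i] * f[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if rho <= 0:          # truncate the integral at the first zero crossing
            break
        T += rho * dt
    return mean_velocity * T

# Synthetic 20 Hz velocity record with ~1 s correlation time (AR(1) model)
random.seed(0)
dt, n, mean_u = 0.05, 5000, 8.0
u, x = [], 0.0
for _ in range(n):
    x = 0.95 * x + random.gauss(0.0, 1.0)
    u.append(mean_u + x)
L = integral_length_scale(u, dt, mean_u)
print(f"Estimated integral length scale: {L:.1f} m")
```

For real ASL data the mean velocity, sampling rate and record length come from the tower measurements; the truncation rule (first zero crossing of the autocorrelation) is one common convention among several.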
Procedia PDF Downloads 123
315 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembling accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembling accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of assembly stages are carried out to verify the correspondence of real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task. Solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by use of computer calculations. One actual example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction of the dimensional chain consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades and the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains by means of power circuits formed by the middle parts of the turbine blades. The need to improve the assembly technology of this unit makes the calculation of its dimensional chain timely. 
The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of production errors in geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of parts taking into account actual or generalized distributions of errors of geometric parameters; 4) calculation of the model in a CAE system, loading various combinations of the part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the Matlab software package. Within the framework of the study, measurements of the turbine wheel components (blades and disks) are available, on the basis of which it is expected that the assembly process of the unit will be optimized by solving the dimensional chains. Keywords: accuracy, assembly, interacting dimension chains, turbine
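The statistical pre-simulation described in steps 1) to 5) can be illustrated with a minimal Monte Carlo sketch (written in Python rather than the Matlab used by the authors). The nominal dimensions, tolerances and specification limits below are hypothetical, not taken from the turbine wheel data:

```python
import random

# Hypothetical dimensional chain: the clearance between a blade root and its
# disk groove is the closing link of two toleranced dimensions (mm).
GROOVE_WIDTH = (12.00, 0.02)    # nominal value, standard deviation
ROOT_WIDTH = (11.90, 0.02)
CLEARANCE_SPEC = (0.04, 0.16)   # acceptable min/max closing clearance

def simulate(n=100_000, seed=42):
    """Monte Carlo over normally distributed production errors: the share
    of assemblies whose closing clearance falls inside the specification."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        groove = rng.gauss(*GROOVE_WIDTH)
        root = rng.gauss(*ROOT_WIDTH)
        if CLEARANCE_SPEC[0] <= groove - root <= CLEARANCE_SPEC[1]:
            ok += 1
    return ok / n

print(f"Estimated assembly yield: {simulate():.1%}")
```

Replacing the normal distributions with the actual measured error distributions of the blades and disks is exactly what steps 1) and 3) of the algorithm provide.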
Procedia PDF Downloads 372
314 Effects of Bone Marrow Derived Mesenchymal Stem Cells (MSC) in Acute Respiratory Distress Syndrome (ARDS) Lung Remodeling
Authors: Diana Islam, Juan Fang, Vito Fanelli, Bing Han, Julie Khang, Jianfeng Wu, Arthur S. Slutsky, Haibo Zhang
Abstract:
Introduction: MSC delivery in preclinical models of ARDS has demonstrated significant improvements in lung function and recovery from acute injury. However, the role of MSC delivery in ARDS-associated pulmonary fibrosis is not well understood. Some animal studies using bleomycin-, asbestos-, and silica-induced pulmonary fibrosis show that MSC delivery can suppress fibrosis, while other animal studies using radiation-induced pulmonary fibrosis, liver, and kidney fibrosis models show that MSC delivery can contribute to fibrosis. Hypothesis: The beneficial and deleterious effects of MSC in ARDS are modulated by the lung microenvironment at the time of MSC delivery. Methods: To induce ARDS, a two-hit mouse model of hydrochloric acid (HCl) aspiration (day 0) and mechanical ventilation (MV) (day 2) was used. HCl and injurious MV generated fibrosis within 14-28 days. 0.5×10⁶ mouse MSCs were delivered (via both intratracheal and intravenous routes) either in the active inflammatory phase (day 2) or during the remodeling phase (day 14) of ARDS (mouse fibroblasts or PBS were used as a control). Lung injury was assessed using an inflammation score and elastance measurement. Pulmonary fibrosis was assessed using a histological score, tissue collagen level, and collagen expression. In addition, alveolar epithelial (E) and mesenchymal (M) marker expression profiles were also measured. All measurements were taken at days 2, 14, and 28. Results: MSC delivery 2 days after HCl exacerbated lung injury and fibrosis compared to HCl alone, while day 14 delivery showed protective effects. However, in the absence of HCl, MSC significantly reduced injurious MV-induced fibrosis. HCl injury suppressed E markers and up-regulated M markers. MSC delivery 2 days after HCl further amplified M marker expression, indicating a role in myofibroblast proliferation/activation, while with day 14 delivery, E marker up-regulation was observed, indicating a role in epithelial restoration. 
Conclusions: Early MSC delivery can be protective against injurious MV. Late MSC delivery during the repair phase may also aid recovery. However, early MSC delivery during the exudative inflammatory phase of HCl-induced ARDS can result in pro-fibrotic profiles. It is critical to understand the interaction between MSCs and the lung microenvironment before MSC-based therapies are utilized for ARDS. Keywords: acute respiratory distress syndrome (ARDS), mesenchymal stem cells (MSC), hydrochloric acid (HCl), mechanical ventilation (MV)
Procedia PDF Downloads 667
313 System Devices to Reduce Particulate Matter Concentrations in Railway Metro Systems
Authors: Armando Cartenì
Abstract:
Within the design of sustainable transportation engineering, the problem of reducing particulate matter (PM) concentrations in railway metro systems has not been much discussed. It is well known that PM levels in railway metro systems are mainly produced by mechanical friction at the rail-wheel-brake interactions and by the PM re-suspension caused by the turbulence generated by the train passage, which poses dangerous problems for passenger health. Starting from these considerations, the aim of this research was twofold: i) to investigate the particulate matter concentrations in a ‘traditional’ railway metro system; ii) to investigate the particulate matter concentrations of a ‘high quality’ metro system equipped with design devices useful for reducing PM concentrations: platform screen doors, rubber-tyred wheels, and an advanced ventilation system. Two measurement surveys were performed: one in the ‘traditional’ metro system of Naples (Italy) and another in the ‘high quality’ rubber-tyred metro system of Turin (Italy). Experimental results regarding the ‘traditional’ metro system of Naples show that the average PM10 concentrations measured on the underground station platforms are very high, ranging between 172 and 262 µg/m³, whilst the average PM2.5 concentrations range between 45 and 60 µg/m³, with dangerous implications for passenger health. 
By contrast, the measurement results regarding the ‘high quality’ metro system of Turin show that: i) the average PM10 (PM2.5) concentration measured on the underground station platform is 22.7 µg/m³ (16.0 µg/m³) with a standard deviation of 9.6 µg/m³ (7.6 µg/m³); ii) the indoor concentrations (both for PM10 and for PM2.5) are statistically lower than those measured outdoors (with a ratio equal to 0.9-0.8), meaning that the indoor air quality is better than that of the urban ambient air; iii) PM concentrations in underground stations are correlated with train passage; iv) the inside-train concentrations (both for PM10 and for PM2.5) are statistically lower than those measured at the station platform (with a ratio equal to 0.7-0.8), meaning that inside the trains the air conditioning system could promote a greater circulation that cleans the air. The comparison between the two case studies shows that a metro system designed with PM reduction devices can reduce PM concentrations by up to a factor of 11 with respect to a ‘traditional’ one. From these results, it is possible to conclude that PM concentrations measured in a ‘high quality’ metro system are significantly lower than those measured in ‘traditional’ railway metro systems. This result provides a basis for the design of useful devices for retrofitting metro systems all around the world. Keywords: air quality, pollutant emission, quality in public transport, underground railway, external cost reduction, transportation planning
Procedia PDF Downloads 210
312 Evaluation of Vitamin D Levels in Obese and Morbid Obese Children
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Obesity is a growing, serious health problem throughout the world. Vitamin D appears to play a role in cardiovascular and metabolic health. Vitamin D deficiency may add to derangements in human metabolic systems, particularly those of children. Childhood obesity is associated with an increased risk of chronic and sophisticated diseases. The aim of this study is to investigate associations as well as possible differences related to parameters affected by obesity and their relations with vitamin D status in obese (OB) and morbid obese (MO) children. This study included a total of 78 children. Of them, 41 and 37 were OB and MO, respectively. WHO BMI-for-age percentiles were used for the classification of obesity. The values above the 99th percentile were defined as MO. Those between the 95th and 99th percentiles were included in the OB group. Anthropometric measurements were recorded. Basal metabolic rates (BMRs) were measured. Vitamin D status was determined by the measurement of 25-hydroxy cholecalciferol [25-hydroxyvitamin D3, 25(OH)D] using high-performance liquid chromatography. Vitamin D status was evaluated as deficient, insufficient and sufficient. Values < 20.0 ng/ml, values between 20-30 ng/ml and values > 30.0 ng/ml were defined as vitamin D deficient, insufficient and sufficient, respectively. The optimal 25(OH)D level was defined as ≥ 30 ng/ml. The SPSSx statistical package program was used for the evaluation of the data. The statistical significance degree was accepted as p < 0.05. Mean ages did not differ between the groups. Significantly increased body mass index (BMI), waist circumference (C) and neck C as well as significantly decreased fasting blood glucose (FBG) and vitamin D values were observed in the MO group (p < 0.05). In the OB group, 37.5% of the children were vitamin D deficient, and in the MO group the corresponding value was 53.6%. 
No difference between the groups in terms of lipid profile, systolic blood pressure (SBP), diastolic blood pressure (DBP) and insulin values was noted. There was a highly significant difference between the FBG values of the groups (p < 0.001). Important correlations between BMI, waist C, hip C, neck C and both SBP as well as DBP were found in the OB group. In the MO group, correlations only with SBP were obtained. In a similar manner, in the OB group, correlations were detected between SBP-BMR and DBP-BMR. However, in MO children, BMR correlated only with SBP. Associations of vitamin D with anthropometric indices as well as some lipid parameters were defined. In the OB group, BMI, waist C, hip C and triglycerides (TRG) were negatively correlated with vitamin D concentrations, whereas no such correlations were detected in the MO group. Vitamin D deficiency may contribute to the complications associated with childhood obesity. The loss of correlations between obesity indices-DBP and vitamin D-TRG, as well as the relatively lower FBG values observed in the MO group, points out that the emergence of MetS components starts during the obesity state, just before the transition to morbid obesity. Aside from its deficiency state, associations of vitamin D with anthropometric measurements, blood pressures and TRG should also be evaluated before the development of morbid obesity. Keywords: children, morbid obesity, obesity, vitamin D
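The vitamin D cut-offs defined in the study translate directly into a small classification helper; a sketch (the function name is ours, the thresholds are the ones stated in the abstract):

```python
def vitamin_d_status(level_ng_ml):
    """Classify a 25(OH)D concentration using the study's cut-offs:
    < 20 ng/ml deficient, 20-30 ng/ml insufficient, > 30 ng/ml sufficient."""
    if level_ng_ml < 20.0:
        return "deficient"
    if level_ng_ml <= 30.0:
        return "insufficient"
    return "sufficient"

print(vitamin_d_status(15.2))   # deficient
print(vitamin_d_status(24.8))   # insufficient
print(vitamin_d_status(33.1))   # sufficient
```

Applying such a rule per child is how the group-level figures (37.5% deficient in OB, 53.6% in MO) would be tabulated.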
Procedia PDF Downloads 139
311 Digital Image Correlation: Metrological Characterization in Mechanical Analysis
Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano
Abstract:
Digital Image Correlation (DIC) is a newly developed optical technique that is spreading in all engineering sectors because it allows the non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing whenever the global deformation state is to be known without using strain gages, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, allowing high-definition mapping of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. A specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets, both in static and dynamic loading conditions, by comparison between DIC and strain gauge measurements. 
In the static test, interesting results were obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by DIC is compliant with the result of an FEM simulation. In the dynamic test, DIC was able to follow with good accuracy the periodic deformation of the specimen, giving results coherent with those given by the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace sector. Keywords: accuracy, deformation, image correlation, mechanical analysis
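The "mutual correlation between the reference image and those acquired during the deformation" is typically computed as a zero-normalized cross-correlation (ZNCC) over small subsets. A 1-D toy sketch with a hypothetical grey-level profile shifted by a known 2-pixel displacement (real DIC uses 2-D subsets and sub-pixel interpolation):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets,
    the matching criterion commonly used in DIC (1.0 = perfect match)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# Hypothetical 1-D grey-level profile: the "deformed" image is the reference
# shifted by 2 pixels, so the ZNCC peak should recover that displacement.
reference = [10, 12, 50, 80, 50, 12, 10, 9, 8, 10, 11, 9]
deformed = [9, 8, 10, 12, 50, 80, 50, 12, 10, 9, 8, 10]

subset = reference[1:6]
scores = {s: zncc(subset, deformed[1 + s:6 + s]) for s in range(0, 5)}
best_shift = max(scores, key=scores.get)
print(f"Estimated displacement: {best_shift} px")
```

Repeating this search for every subset of the reference image is what yields the full-field displacement map mentioned in the abstract.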
Procedia PDF Downloads 310
310 The Effect and Durability of Functional Exercises on the Balance Evaluation Systems Test (BESTest) in Intellectual Disabilities: A Preliminary Report
Authors: Saeid Bahiraei, Hassan Daneshmandi, Ali Asghar Norasteh
Abstract:
The present study examines the effects of 8 weeks of selected corrective exercise training at stable and unstable levels on the postural control of people with intellectual disability (ID). Problems and limitations of movement in individuals with ID are highly common and may cause the loss of basic performance and limit the person's independence in daily activities. In the present study, thirty-four young adults with intellectual disabilities were selected randomly and divided into three groups. The BESTest was used to measure the balance variable indicators. The intervention groups did the selected functional exercises for 8 weeks (3 sessions of 45 to 50 minutes a week), while the control group did not experience any kind of exercise. Statistical analysis was performed in SPSS at a significance level of p < 0.05. The results showed that the interaction between time and group was significant in all the BESTest tests (P = 0.001). Comparing the studied groups across the time measurements, a significant difference was found for the unstable group in Biomechanical Constraints (P < 0.05). A significant difference also existed for the stable and unstable groups in the Stability Limits/Verticality, Postural Responses and Anticipatory Postural Adjustment variables (except between the follow-up and pre-test levels), and in Stability in Gait and Sensory Orientation at the pre-test, post-test and follow-up stages (P < 0.05). In the comparison between the times of measurement and the groups under study, the results showed significant differences in Biomechanical Constraints, Anticipatory Postural Adjustment and Postural Responses at the pre-test to follow-up stage between the unstable-stable and unstable-control groups (P < 0.05); differences were also significant between all groups in the Stability Limits/Verticality, Sensory Orientation, Stability in Gait and overall stability index variables (P < 0.05). 
The findings showed that the practice group at the unstable level had more improvement than the practice group at the stable level. In conclusion, this study presents evidence that selected functional exercises can be recognized as a comprehensive and effective intervention for the improvement of balance in intellectually disabled people, and that they also affect functional and motor activities. Keywords: intellectual disability, BESTest, rehabilitation, postural control
Procedia PDF Downloads 176