Search results for: dynamic panel models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10671

1071 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions

Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita

Abstract:

Self-assembly is an attractive route for forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role during self-assembly in the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions; although these are considerably weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for preparing various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization, and polymer drug delivery systems. The present investigation focuses on evaluating PPC formation between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques: spectroscopy, microscopy, thermogravimetric analysis, DLS, and zeta potential determination, with measurements performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes in the surface free energy of the prepared PPCs in relation to the complexes' composition.
The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes in both partners, a 1/1 protein-to-polyelectrolyte ratio being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms a compact structure of small dimensions, with a hydrodynamic diameter close to that of BSA. The behavior of the prepared PPCs under thermal treatment is in agreement with the composition of the complexes. The contact angle measurements indicate increased cohesion of the PPC films, higher than that of BSA films. The new PPC films also exhibit higher hydrophobicity, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. SEM investigation likewise evidenced the specific internal structure of the PPC, with phases of different size and shape depending on the interpolymer mixture composition.
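The hydrodynamic diameters reported by DLS are conventionally obtained from the measured translational diffusion coefficient via the Stokes-Einstein relation. A minimal sketch of that conversion follows; the diffusion coefficient, temperature, and viscosity values are illustrative assumptions, not figures from the study:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein relation: d_H = k_B * T / (3 * pi * eta * D).

    D   : translational diffusion coefficient, m^2/s (measured by DLS)
    T   : absolute temperature, K
    eta : solvent dynamic viscosity, Pa*s (water at 25 C by default)
    Returns the hydrodynamic diameter in metres.
    """
    return K_B * T / (3.0 * math.pi * eta * D)

# An illustrative diffusion coefficient of order 6e-11 m^2/s yields a diameter
# of roughly 8 nm, the size regime of a globular protein such as BSA.
d = hydrodynamic_diameter(6.0e-11)
print(round(d * 1e9, 1), "nm")  # 8.2 nm
```

The same relation underlies how a shift toward a more compact PPC shows up in DLS: a faster-diffusing complex reports a smaller hydrodynamic diameter.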

Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly

Procedia PDF Downloads 236
1070 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction

Authors: Maria Angela Bedini, Fabio Bronzini

Abstract:

The Italian experience of post-earthquake recovery, both positive and negative, has been conditioned by long timescales and structural bureaucratic constraints, partly motivated by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays, incompatible with the need for rapid recovery of territories in crisis. Intervening in areas affected by seismic events means coupling the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that have taken place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of reconstructing activities, sociality, services, and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology is a synthetic comparison of the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results can be summarized in technical comparison cards on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not related only to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), together with the strategic monitoring process, becomes a dynamic tool for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspect of 'inverting' priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more updated system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and of the rehabilitation and development of the most fragile places in Italy and abroad.

Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy

Procedia PDF Downloads 125
1069 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis

Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal

Abstract:

Biogas production from the anaerobic digestion of organic sludge from wastewater treatment, as well as of various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from the biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to efficiently extract CO₂ from the biogas, which is then compressed and stored for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional, expensive membrane component for producing the same amount of methane. However, given that the sustainability of the produced biogas should be retained after upgrading, renewable electricity should be supplied to drive the electrolyzer. Considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach therefore offers high operational flexibility. This indicates that the two approaches should be compared based on the availability and scale of the local renewable power supply, and not only on the technical systems themselves. Solid-oxide electrolysis (SOE) generally offers high overall system efficiency and, more importantly, can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits in the case of CO₂ separation from the produced biogas.
When co-electrolysis is taken into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture passing through the SOE, and (2) CO₂ separation from the biogas, with the separated CO₂ later used for co-electrolysis. A case study integrating SOE into a wastewater treatment plant is investigated with wind power as the renewable source. The dynamic production of biogas is provided on an hourly basis with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis; (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis; (c) steam electrolysis and CO₂ separation (and storage) with wind power; and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated by a sensitivity analysis. The results provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, thus assisting decision-making for biogas production sites. The research leading to the presented work is funded by the European Union's Horizon 2020 programme under grant agreement n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and by SCCER BIOSWEET.

Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage

Procedia PDF Downloads 149
1068 Compositional Assessment of Fermented Rice Bran and Rice Bran Oil and Their Effect on High Fat Diet Induced Animal Model

Authors: Muhammad Ali Siddiquee, Md. Alauddin, Md. Omar Faruque, Zakir Hossain Howlader, Mohammad Asaduzzaman

Abstract:

Rice bran (RB) and rice bran oil (RBO) are prominent food components worldwide. In this study, fermented rice bran (FRB) was produced by employing edible gram-positive bacteria (Lactobacillus acidophilus, Lactobacillus bulgaricus, and Bifidobacterium bifidum) at 125 x 10⁵ spores g⁻¹ of rice bran and was investigated to evaluate its nutritional quality. Crude rice bran oil (CRBO) was extracted from RB, and its quality was compared to that of market-available rice bran oil (MRBO) in Bangladesh. We found that fermentation of rice bran with lactic acid bacteria increased total protein (29.52%), fat (5.38%), ash (48.47%), crude fiber (38.96%), and moisture (61.04%) and reduced the carbohydrate content (36.61%). We also found that the essential amino acids (methionine, tryptophan, threonine, valine, leucine, lysine, histidine, and phenylalanine) and non-essential amino acids (alanine, aspartate, glycine, glutamine, proline, serine, and tyrosine) were increased in FRB, with the exceptions of methionine and proline. Moreover, the total phenolic content, tannin content, flavonoid content, and antioxidant activity were increased in FRB. The RBO analysis showed a γ-oryzanol content of 10.00 mg/g in CRBO, compared with 7.40 to 12.70 mg/g in MRBO, while the vitamin E content was higher in CRBO (0.20%) than in MRBO (0.097 to 0.12%). CRBO contained 25.16% total saturated and 74.44% total unsaturated fatty acids, whereas MRBO contained 22.08 to 24.13% saturated and 71.91 to 83.29% unsaturated fatty acids, respectively. The physiochemical parameters were satisfactory in all samples except for the acid and peroxide values, which were higher in CRBO. Finally, animal experiments showed that FRB and CRBO reduce body weight, glucose, and the lipid profile in a high-fat-diet-induced animal model. Thus, FRB and RBO could be value-added food supplements for human health.

Keywords: fermented rice bran, crude rice bran oil, amino acids, proximate composition, gamma-oryzanol, fatty acids, heavy metals, physiochemical parameters

Procedia PDF Downloads 59
1067 Comparison of Mechanical Behaviors of Mastication in Teeth Movement Cases

Authors: Jae-Yong Park, Yeo-Kyeong Lee, Hee-Sun Kim

Abstract:

Purpose: This study investigates the mechanical behavior of mastication under various teeth-movement conditions. Three masticatory cases were considered: a general case, with the common arrangement of all teeth, and two teeth-movement cases following extraction of tooth no. 14, one in which the molar teeth have moved halfway into the no. 14 tooth seat and one in which they have moved fully into the no. 14 tooth seat. Materials and Methods: To analyze these cases, three-dimensional finite element (FE) models of the skull were generated based on computed tomography images (964 DICOM files) of a 38-year-old male with normal occlusion. The FE model of the general occlusal case was used to develop a CAE procedure, which was then applied to the FE models of the other occlusal cases. Displacement controls according to the loading condition were applied to simulate occlusal behavior in all cases. From the FE analyses, the von Mises stress distribution of the skull and teeth was observed. The von Mises (effective) stress is widely used to determine an absolute stress value regardless of stress direction and of the yield characteristics of materials. Results: In the general occlusal case, high stress was distributed over the periodontal area of the mandible under the molar teeth when load was transmitted in the coronal-apical direction. Following the stress propagation from teeth to cranium, the stress distribution decreased as it propagated from the molar teeth to the infratemporal crest of the greater wing of the sphenoid bone and the lateral pterygoid plate. In the two teeth-movement cases, high stresses were distributed over the periodontal area of the mandible under the teeth located beneath the moved molar teeth.
Conclusion: The mechanical behaviors of the general case and the two teeth-movement cases during mastication were predicted and investigated, including qualitative validation. Displacement controls as the loading condition effectively simulated occlusal behavior in the two molar-teeth-movement cases.
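The von Mises (effective) stress that such FE analyses report is computed directly from the six components of the local stress tensor. A minimal sketch of that formula follows; the numerical stress values are illustrative, not taken from the study:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from the six components of a 3D stress state:
    sigma_vm = sqrt(0.5 * ((sx-sy)^2 + (sy-sz)^2 + (sz-sx)^2) + 3 * (txy^2 + tyz^2 + tzx^2)).
    """
    return math.sqrt(
        0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
        + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
    )

# Uniaxial tension: the equivalent stress equals the applied stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0
# Pure shear: sigma_vm = sqrt(3) * tau.
print(round(von_mises(0.0, 0.0, 0.0, 50.0, 0.0, 0.0), 2))  # 86.6
```

Because the result is a single scalar regardless of stress direction, it is a convenient quantity to contour over the skull and teeth models, which is why it is the measure reported above.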

Keywords: cranium, finite element analysis, mandible, masticatory action, occlusal force

Procedia PDF Downloads 388
1066 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation

Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos

Abstract:

The Middle East is highly affected by air pollution induced by anthropogenic and natural phenomena. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have warned of the high concentration of particulates and their effect, not just around industrial and construction areas but also in the immediate working and living environment. One method of studying air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures per the European and US standards. However, in many cases they have been inefficient at accurately capturing the spatial variability of air pollution due to the small number of installations, which is ultimately attributable to the high cost of the equipment and the limited availability of users with the necessary expertise and scientific background. Passive sampling is an alternative that addresses the limitations of the active methods: it is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible, though less accurate, option. This study aims to investigate and evaluate the use of passive sampling for particulate matter monitoring in dry tropical climates, like that of the Middle East. More specifically, a number of field measurements have been conducted, both indoors and outdoors, in Qatar, and the results have been compared with active sampling equipment and the reference methods. The samples have been analyzed to obtain the particle size distribution by applying existing laboratory techniques (optical microscopy) and by exploring new approaches such as white light interferometry. The new parameters of the well-established model have then been calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models.
The outcome of this project is also expected to have an impact on the public, as it will raise awareness about quality of life and about the importance of implementing a research culture in the community.

Keywords: air pollution, passive samplers, interferometry, indoor, outdoor

Procedia PDF Downloads 394
1065 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT

Authors: Jae Ni Jang, Young Uk Kim

Abstract:

Background: Chat Generative Pre-training Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the use of ChatGPT increases among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification for pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, an accuracy rate of 48.9%. Furthermore, the responses regarding children and pregnant women were mostly inaccurate, with an accuracy rate of approximately 28% (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future.
Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT has a clinically unacceptable rate of error and hallucination, particularly in pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
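The accuracy rates quoted above are simple proportions of correct answers; a minimal sketch that reproduces the reported figures from the raw counts:

```python
def accuracy(correct, total):
    """Fraction of correct answers, as a percentage rounded to one decimal place."""
    return round(100.0 * correct / total, 1)

# Figures reported in the abstract:
print(accuracy(23, 47))  # adult questions -> 48.9
print(accuracy(5, 18))   # pediatric/pregnancy questions -> 27.8 (quoted as ~28%)
```

With samples this small (18 questions), the uncertainty around such a proportion is wide, which supports the abstract's call for repeated assessments rather than relying on a single snapshot.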

Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT

Procedia PDF Downloads 40
1064 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies

Authors: Elżbieta Turska

Abstract:

Introduction: Adolescence is one of the most difficult times in a person's life, and the experiences of this period may shape that person's future life to a large extent. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and a loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods; for most, these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students aged 13-16 years; one of its parts was the Burns checklist, i.e., the standard test for identifying depressed mood. The questionnaire asked about many aspects of the students' lives and included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from several problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The data problems described above practically excluded the use of classical statistical methods. For this reason, we used the following artificial intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out using the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin.
As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from most to least influential with respect to protection against mood disorders. Thirteen of the twenty most important variables reflect relationships with parents. This seems to be a truly significant result, both from a cognitive point of view and from a practical one, i.e., for designing interventions to correct mood disorders.
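The variable ranking produced by tree-based methods rests on impurity reduction: at each split, a classification tree picks the variable and threshold that most reduce class impurity, and a variable's importance accumulates these gains. A minimal pure-Python sketch of that core quantity (the toy feature and labels are illustrative, not survey data):

```python
def gini(labels):
    """Gini impurity of a list of binary (0/1) labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def split_gain(xs, ys, threshold):
    """Impurity reduction from splitting (xs, ys) at threshold -- the quantity
    a classification tree maximises, and the basis of impurity-based
    variable-importance rankings."""
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    n = len(ys)
    return gini(ys) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

# Toy data: a feature that separates the labels perfectly yields the full gain.
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(split_gain(xs, ys, 3))  # 0.5  (gini(ys) = 0.5, both children pure)
```

Surrogate splits, as used in the study, extend this idea to rows with missing values by finding backup variables whose splits mimic the primary one, which is what makes trees robust to the missing-data problem described above.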

Keywords: mood disorders, adolescents, family, artificial intelligence

Procedia PDF Downloads 99
1063 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study

Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle, and connective tissue to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance, and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years, randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. Vibration parameters of 20 Hz frequency and 3 mm amplitude were used in the test group; the control group performed the exercises with the plate switched off. All participants performed six dynamic exercises lasting 30 seconds each, with 60 seconds of rest between them. The exercises involved the large muscle groups of the trunk, pelvis, and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for a starting position of 45° in an open kinematic chain, with JPS error measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with eyes open and closed (each test lasting 30 s), using a Matscan system with FootMat 7.0 SAM software. The surface of the confidence ellipse and the front-back and right-left sway were measured to assess balance.
Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS error did not change significantly in either the test group (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the balance-test parameters, with eyes open or closed, in either the test or the control group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimulus conduction has only a short-lasting effect, present only while the vibration stimulus is applied. 2. Short-term use of vibration in treatment does not impair proprioception and seems safe for patients with proprioceptive impairment. 3. These results should be supplemented with an assessment of proprioception during the application of vibration stimuli, and the impact of the vibration parameters used in the exercises should be evaluated.

Keywords: balance, joint position sense, proprioception, whole body vibration

Procedia PDF Downloads 324
1062 Structural Strength Evaluation and Wear Prediction of Double Helix Steel Wire Ropes for Heavy Machinery

Authors: Krunal Thakar

Abstract:

Wire ropes combine high tensile strength and flexibility compared to other general steel products. They are used in various applications such as cranes, mining, elevators, bridges, and cable cars. The earliest reported use of wire ropes was for mining hoists in the 1830s. Since then, there have been substantial advances in the design of wire ropes for various applications. Under operational conditions, wire ropes are subjected to varying tensile and bending loads, resulting in material wear and eventual structural failure due to fretting fatigue. Conventional inspection methods for detecting wire failure are limited to the outer wires of the rope, and to date there is no effective mathematical model for examining inter-wire contact forces and wear characteristics. The scope of this paper is to present a computational simulation technique to evaluate inter-wire contact forces and wear, which are in many cases responsible for rope failure. Two different types of rope, IWRC-6xFi(29) and U3xSeS(48), were taken for structural strength evaluation and wear prediction. Both ropes have a double-helix twisted wire profile per JIS standards and are mainly used in cranes. CAD models of both ropes were developed in general-purpose design software using an in-house formulation to generate the double-helix profile. Numerical simulation was performed under two load cases: (a) axial tension and (b) bending over sheave. Parameters such as stresses, contact forces, wear depth, and load-elongation were investigated and compared between the two ropes. The numerical simulation method facilitates detailed investigation of inter-wire contact and wear characteristics. In addition, various selection parameters such as sheave diameter, rope diameter, helix angle, swaging, and maximum load-carrying capacity can be analyzed quickly.
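A double-helix wire centreline can be parametrised as a helix (the wire about the strand axis) wound on another helix (the strand about the rope axis). The sketch below uses a simplified toroidal form that neglects the tilt of the local strand frame; the authors' in-house formulation is not published here, and all radii, pitches, and turn counts are illustrative assumptions:

```python
import math

def double_helix_point(t, R_s, p_s, r_w, n_w):
    """Approximate point on a double-helix wire centreline at strand angle t (rad).

    R_s : strand-helix radius about the rope axis
    p_s : strand-helix pitch (axial advance per full strand turn)
    r_w : wire-helix radius about the strand axis
    n_w : wire turns per strand turn

    Simplified toroidal form; a full formulation would express the wire helix
    in the Frenet frame of the strand helix.
    """
    theta_w = n_w * t
    radial = R_s + r_w * math.cos(theta_w)  # distance from the rope axis
    x = radial * math.cos(t)
    y = radial * math.sin(t)
    z = p_s * t / (2.0 * math.pi) + r_w * math.sin(theta_w)
    return x, y, z

# At t = 0 the wire sits at its outermost radial position on the strand.
print(double_helix_point(0.0, R_s=10.0, p_s=70.0, r_w=2.0, n_w=6))  # (12.0, 0.0, 0.0)
```

Sweeping t and emitting such points is how a CAD spline for each wire can be generated before meshing the rope for FE analysis.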

Keywords: steel wire ropes, numerical simulation, material wear, structural strength, axial tension, bending over sheave

Procedia PDF Downloads 149
1061 Gendered Mobility: Deep Distributions in Urban Transport Systems in Delhi

Authors: Nidhi Prabha

Abstract:

Transportation is one of the most significant infrastructural elements of the 'urban.' The distinctness of urban life in a city is marked by the dynamic movements it enables within the city-space. It is therefore important to study the public-transport systems that enable and foster the mobility which characterizes the urban. It is also crucial to underscore the way one examines urban transport systems: either as an infrastructural unit in a strictly physical-structural sense, or as a structural unit which acts as a prism refracting multiple experiences depending on the location of the 'commuter.' The proposed paper attempts to uncover and investigate the assumption of the neuter-commuter by looking at urban transportation in the second sense, i.e., as a structural unit which is experienced differently by different kinds of commuters, thus making transportation deeply distributed across various social structures and locations, such as class or gender, which map onto the transport systems. To this end, the public-transit systems operating in urban Delhi, i.e., the Delhi Metro and the public buses run by the Delhi Transport Corporation, are taken as case studies. The study is premised on knowledge and data from both primary and secondary sources. Primary sources include data collected from fieldwork, whose methodology ranged from 'mixed methods' (qualitative-then-quantitative) to borrowed ethnographic techniques. Apart from fieldwork, other primary sources include the annual reports and policy documents of the Delhi Metro Rail Corporation (DMRC) and the Delhi Transport Corporation (DTC), Union and Delhi budgets, the Economic Survey of Delhi, press releases, etc. Secondary sources include the vast literature on the critical nodes that inform the research, such as gender, transport geographies, and urban space.
The study indicates a deeply distributed urban transport system in which various social-structural locations map onto the way different kinds of commuters experience mobility within the city space. Mobility thus becomes gendered and has class-based ramifications, and the neuter-commuter assumption is thereby challenged. Such an understanding enables us to question the anonymity which the 'urban' otherwise claims to provide over the rural: the rural is opposed to the urban, with the urban ushering in a modern way of life that breaks the ties of traditional social identities. A careful study of the transport systems through the travelling patterns and choices of commuters, however, indicates that this does not hold true, as even the shared 'public space' of the transport systems allocates different places to different kinds of commuters. The central argument made through the research is therefore that infrastructure like urban transport systems has to be studied and examined as more than just a physical structure. The varied experiences of daily mobility of different kinds of commuters have to be taken into account in order to design and plan more inclusive transport systems.

Keywords: gender, infrastructure, mobility, urban-transport-systems

Procedia PDF Downloads 216
1060 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID cases is evaluated. Random Forest outperformed the other two machine learning algorithms with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
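The evaluation pipeline the abstract describes, an 8:2 train/test split scored by r-squared and mean squared error, can be sketched as below. This is a minimal illustration with invented helper names, not the authors' code; in the study itself these metrics score Random Forest, SVM, and linear regression predictions.

```python
import numpy as np

def train_test_split_8_2(X, y):
    """Chronological 8:2 split, as used for the daily COVID-19 series."""
    cut = int(0.8 * len(X))
    return X[:cut], X[cut:], y[:cut], y[cut:]

def mean_squared_error(y_true, y_pred):
    """MSE: average squared prediction error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A model that predicted the held-out 20% perfectly would score r² = 1 and MSE = 0; the large reported MSE values reflect the scale of cumulative case counts.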

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 117
1059 Irradion: Portable Small Animal Imaging and Irradiation Unit

Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek

Abstract:

In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit based entirely on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed Tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition utilizing computed tomography. Irradion covers nearly all conventional 2D and 3D trajectories of X-ray imaging with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast. They can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied for studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches. We demonstrate in this publication various novel scan trajectories and their benefits. Proton Imaging and Particle Tracking: The Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding particle tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation. 
In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed for the project is an entirely new product on the market: preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The technology researched can represent a significant leap forward compared to current first-generation devices.

Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging

Procedia PDF Downloads 85
1058 Exoskeleton Response during Infant Physiological Knee Kinematics and Dynamics

Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno

Abstract:

Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on the adult population and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that provided the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, with the selection based on range, transient response, and stable response: kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM), whose structure comprises cells that are clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton performed well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, which the exoskeleton was not able to replicate accurately for all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.

Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics

Procedia PDF Downloads 77
1057 Optical Coherence Tomography Imaging of Epidermal Hyperplasia in Vivo in a Mouse Model of Oxazolone-Induced Atopic Dermatitis

Authors: Eric Lacoste

Abstract:

Laboratory animals are currently widely used as models of human pathologies in dermatology, such as atopic dermatitis (AD). These models provide a better understanding of the pathophysiology of this complex and multifactorial disease, the discovery of potential new therapeutic targets, and the testing of the efficacy of new therapeutics. However, confirmation of the correct development of AD is mainly based on histology from skin biopsies, requiring invasive surgery or euthanasia of the animals, plus slicing and staining protocols. There are, however, accessible imaging technologies such as Optical Coherence Tomography (OCT), which allows non-invasive visualization of the main histological structures of the skin (such as the stratum corneum, epidermis, and dermis) and assessment of the dynamics of the pathology or the efficacy of new treatments. Briefly, female immunocompetent hairless mice (SKH1 strain) were sensitized and challenged topically on the back and ears for about 4 weeks. Back skin and ear thickness were measured using a calliper three times per week, to complement a macroscopic evaluation of atopic dermatitis lesions on the back: erythema, scaling, and excoriation scoring. In addition, OCT was performed on the back and ears of the animals. OCT makes a virtual in-depth section (tomography) of the imaged organ using a laser, a camera, and image processing software, allowing fast, non-contact, and non-denaturing acquisition of the explored tissues. To perform the imaging sessions, the animals were anesthetized with isoflurane and placed on a support under the OCT for a total examination time of 5 to 10 minutes. The results show a good correlation of the OCT technique with classical HES histology for skin lesion structures such as hyperkeratosis, epidermal hyperplasia, and dermis thickness. 
This OCT imaging technique can, therefore, be used in live animals at different times for longitudinal evaluation by repeated measurements of lesions in the same animals, in addition to the classical histological evaluation. Furthermore, this imaging technique speeds up research protocols, reduces the number of animals used, and refines the use of laboratory animals.

Keywords: atopic dermatitis, mouse model, oxazolone model, histology, imaging

Procedia PDF Downloads 128
1056 Understanding Help Seeking among Black Women with Clinically Significant Posttraumatic Stress Symptoms

Authors: Glenda Wrenn, Juliet Muzere, Meldra Hall, Allyson Belton, Kisha Holden, Chanita Hughes-Halbert, Martha Kent, Bekh Bradley

Abstract:

Understanding the help-seeking decision-making process and experiences of health disparity populations with posttraumatic stress disorder (PTSD) is central to the development of trauma-informed, culturally centered, and patient-focused services. Yet, little is known about the decision-making process among adult Black women who are non-treatment seekers, as they are, by definition, not engaged in services. Methods: Audiotaped interviews were conducted with 30 African American adult women with clinically significant PTSD symptoms who were engaged in primary care but not in treatment for PTSD despite symptom burden. A qualitative interview guide was used to elucidate key themes. Independent coding of themes mapped to theory and identification of emergent themes were conducted using qualitative methods. An existing quantitative dataset was analyzed to contextualize responses and provide a descriptive summary of the sample. Results: Emergent themes revealed active mental avoidance, the intermittent nature of distress, ambivalence, and self-identified resilience as undermining help-seeking decisions. Participants were stuck within the help-seeking phase of ‘recognition’ of illness and retained a sense of “it is my decision” despite endorsing significant negative social and environmental influences. Participants distinguished ‘help acceptance’ from ‘help seeking’, with greater willingness to accept help and importance placed on being of help to others. Conclusions: Elucidation of the decision-making process from the perspective of non-treatment seekers has implications for outreach and treatment within models of integrated and specialty systems care. The salience of responses to trauma symptoms and stagnation in the help-seeking recognition phase are findings relevant to integrated care service design and community engagement.

Keywords: culture, help-seeking, integrated care, PTSD

Procedia PDF Downloads 231
1055 Wood as a Climate Buffer in a Supermarket

Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø

Abstract:

Natural materials like wood absorb and release moisture; thus wood can buffer the indoor climate. When used wisely, this buffer potential can be used to counteract the influence of the outer climate on the building. The mass of moisture used in the buffer is defined as the potential hygrothermal mass, which can act as an energy store in a building. This works like a natural heat pump, where the moisture is active in damping the diurnal changes. In Norway, the ability of wood to act as a climate-buffering material is tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy-intensive, and this energy potential can be exploited: examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy that allows dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data, that is, a five-day forecast and a two-day history, to prevent adjustments to smaller weather changes. The ventilation control has three zones. During summer, the moisture is retained to dampen solar radiation through drying. 
In winter, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing the moisture or drying out according to the weather prognoses is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge of the hygrothermal mass potential of materials is promising: when including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
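The three-zone, forecast-driven control strategy described above can be condensed into a rule-based sketch. The function name, thresholds, and mode labels are illustrative assumptions, not the building's actual setpoints; the real controller also weighs a five-day forecast against a two-day history.

```python
def ventilation_strategy(season, co2_ppm, voc_index, forecast_mean_temp_c):
    """Rule-based sketch of the moisture-buffering ventilation control.
    Thresholds are assumed for illustration only."""
    # Indoor air quality (CO2/VOC) overrules the low-energy control.
    if co2_ppm > 1000 or voc_index > 300:
        return "ventilate"
    if season == "summer":
        # Moisture is retained so drying can dampen solar gains.
        return "retain moisture"
    if season == "winter":
        # Moist air is let in; absorption in the wood releases latent heat.
        return "admit moist air"
    # Autumn/spring: store moisture or dry out per the weather prognosis.
    return "store moisture" if forecast_mean_temp_c < 10.0 else "dry out"
```

The override branch mirrors the abstract's point that CO₂ and VOC measurements take precedence over the energy-optimizing regime.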

Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast

Procedia PDF Downloads 208
1054 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and this learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward). 
This study can serve as a basis for further work on more complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
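The stability observed with PPO comes from its clipped surrogate objective, which caps how far a single update can move the policy. Below is the standard form of that objective (due to Schulman et al.) in NumPy; it is a generic illustration, not the authors' training code.

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A), where
    r is the new/old policy probability ratio and A is the advantage
    estimate. The result is maximised per time step during training."""
    ratio = np.asarray(ratio, float)
    advantage = np.asarray(advantage, float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)
```

With eps = 0.2, a ratio of 1.5 on a positive advantage is capped at 1.2·A, so the agent cannot overcommit to one lucky batch; this conservatism is a plausible reason the acceleration, steering, and braking updates stay more stable than under Q-learning or DDPG.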

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 139
1053 Autosomal Dominant Polycystic Kidney Patients May Be Predisposed to Various Cardiomyopathies

Authors: Fouad Chebib, Marie Hogan, Ziad El-Zoghby, Maria Irazabal, Sarah Senum, Christina Heyer, Charles Madsen, Emilie Cornec-Le Gall, Atta Behfar, Barbara Ehrlich, Peter Harris, Vicente Torres

Abstract:

Background: Mutations in PKD1 and PKD2, the genes encoding the proteins polycystin-1 (PC1) and polycystin-2 (PC2), cause autosomal dominant polycystic kidney disease (ADPKD). ADPKD is a systemic disease associated with several extrarenal manifestations. Animal models have suggested an important role for the polycystins in cardiovascular function. The aim of the current study is to evaluate the association of various cardiomyopathies in a large cohort of patients with ADPKD. Methods: Clinical data were retrieved from medical records for all patients with ADPKD and cardiomyopathies (n=159). Genetic analysis was performed on available DNA by direct sequencing. Results: Among the 58 patients included in this case series, 39 patients had idiopathic dilated cardiomyopathy (IDCM), 17 had hypertrophic obstructive cardiomyopathy (HOCM), and 2 had left ventricular noncompaction (LVNC). The mean age at cardiomyopathy diagnosis was 53.3, 59.9, and 53.5 years in IDCM, HOCM, and LVNC patients, respectively. The median left ventricular ejection fraction at initial diagnosis of IDCM was 25%. Average basal septal thickness was 19.9 mm in patients with HOCM. Genetic data were available in 19, 8, and 2 cases of IDCM, HOCM, and LVNC, respectively. PKD1 mutations were detected in 47.4%, 62.5%, and 100% of IDCM, HOCM, and LVNC cases. PKD2 mutations were detected only in IDCM cases and were overrepresented (36.8%) relative to the expected frequency in ADPKD (~15%). The prevalence of IDCM, HOCM, and LVNC in our ADPKD clinical cohort was 1:17, 1:39, and 1:333, respectively. When compared to the general population, IDCM and HOCM were approximately 10-fold more prevalent in patients with ADPKD. Conclusions: In summary, we suggest that PKD1 or PKD2 mutations may predispose to idiopathic dilated or hypertrophic cardiomyopathy. There is a trend for patients with PKD2 mutations to develop the former and for patients with PKD1 mutations to develop the latter. 
Predisposition to various cardiomyopathies may be another extrarenal manifestation of ADPKD.

Keywords: autosomal dominant polycystic kidney (ADPKD), polycystic kidney disease, cardiovascular, cardiomyopathy, idiopathic dilated cardiomyopathy, hypertrophic cardiomyopathy, left ventricular noncompaction

Procedia PDF Downloads 304
1052 Monetary Policy and Asset Prices in Nigeria: Testing for the Direction of Relationship

Authors: Jameelah Omolara Yaqub

Abstract:

One of the main reasons for the existence of a central bank is the belief that central banks have some influence on private sector decisions, which enables the Central Bank to achieve some of its objectives, especially those of stable prices and economic growth. Under the New Keynesian assumption that prices are not fully flexible in the short run, the central bank can temporarily influence the real interest rate and, therefore, have an effect on real output in addition to nominal prices. There is, therefore, the need for the Central Bank to monitor, respond to, and influence private sector decisions appropriately. This shows that the Central Bank and the private sector will both affect and be affected by each other, implying considerable interdependence between the sectors. The interdependence may be simultaneous or not, depending on the level of information readily available and how sensitive prices are to agents’ expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous or not, and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies have ignored the possibility of simultaneous interdependence, while those that have allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy, where information might not be readily available to influence agents’ expectations. In this study, the direction of relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested. 
Asset prices will be represented by the NSE index as well as real estate prices, while monetary policy will be represented by the money supply and the MPR. The VAR model will be used to analyse the relationship between the variables in order to take account of the potential simultaneity of interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to influence private sector decisions effectively and thereby achieve the objectives of price stability and economic growth.
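The Granger test mentioned above asks whether lagged values of one series improve the prediction of another beyond its own lags. A minimal bivariate version of the F-statistic is sketched below in NumPy; it is illustrative only (the study itself would use standard econometric software on the NSE index, real estate prices, money supply, and the MPR).

```python
import numpy as np

def _rss(Y, X):
    """Residual sum of squares from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return float(resid @ resid)

def granger_f(target, driver, lags=1):
    """F-statistic for H0: lagged 'driver' adds no predictive power for
    'target' beyond target's own lags (a minimal bivariate Granger test)."""
    n = len(target)
    Y = target[lags:]
    ones = np.ones(n - lags)
    own = np.column_stack([target[lags - k : n - k] for k in range(1, lags + 1)])
    oth = np.column_stack([driver[lags - k : n - k] for k in range(1, lags + 1)])
    X_restricted = np.column_stack([ones, own])
    X_unrestricted = np.column_stack([ones, own, oth])
    rss_r, rss_u = _rss(Y, X_restricted), _rss(Y, X_unrestricted)
    df2 = (n - lags) - X_unrestricted.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df2)
```

A large F relative to the F(lags, df2) critical value indicates that the driver series Granger-causes the target; running the test in both directions gives the direction of relationship the paper is after.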

Keywords: asset prices, granger causality, monetary policy rate, Nigeria

Procedia PDF Downloads 212
1051 Ranking Theory: The Paradigm Shift in the Statistical Approach to Ranking in a Sports League

Authors: E. Gouya Bozorg

Abstract:

The ranking of sports teams, in particular soccer teams, is of primary importance in professional sports. However, it is still based on classical statistics and on models outside the area of mathematics. Rigorous mathematics, and then statistics, have not been able to engage effectively with the issue of ranking, despite the expectations held of them; this is something that requires serious diagnosis. The purpose of this study is to change the approach in order to get closer to mathematics proper for use in ranking. We recommend theoretical mathematics as a good option because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, which we have applied to soccer as a case study. This is an experimental and theoretical study of ranking in a professional soccer league, based on theoretical mathematics and followed by theoretical statistics. First, we show the theoretical definition of the constant Є = 1.33, the ‘golden number’ of a soccer league. Then, we define the ‘efficiency of a team’ by this number and the formula μ = (Pts / (k·Є)) – 1, in which Pts is the number of points obtained by a team in k games played. Moreover, the k·Є index has been used to show the theoretical median line in the league table and to compare top teams and bottom teams. A theoretical coefficient σ = 1 / (1 + (Ptx / Ptxn)) has also been defined for every match between teams x and xn with points Ptx and Ptxn; with respect to the abilities of the two teams, it gives a performance point resulting in a special ranking for the league, and it has been particularly useful in evaluating the performance of weaker teams. The current theory has been examined against the statistical data of 4 major European leagues over the period 1998-2014. 
Results of this study showed that ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of the teams in a league, including the ‘special table’ of a league. Furthermore, on this basis, the issue of a team's record has been revised and amended. In addition, the theory of ranking can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues in the world over the past two decades have confirmed the theory, which thus constitutes a new approach to the ranking of a soccer league.
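The two formulas quoted in the abstract translate directly into code. The team figures used below are hypothetical and only illustrate how the indices behave.

```python
GOLDEN = 1.33  # the league constant E ('golden number') from the abstract

def team_efficiency(pts, k):
    """mu = (Pts / (k * E)) - 1: positive above the theoretical median
    line k*E of the league table, negative below it."""
    return pts / (k * GOLDEN) - 1.0

def performance_coefficient(pt_x, pt_xn):
    """sigma = 1 / (1 + Ptx / Ptxn) for a match between teams x and xn;
    equal-strength opponents give sigma = 0.5."""
    return 1.0 / (1.0 + pt_x / pt_xn)
```

For a hypothetical team with 40 points after 20 games, the median line is k·Є = 26.6, so μ ≈ 0.50 (above the line). Reading σ directly, the lower-point side of a pairing receives the larger coefficient, which is consistent with the abstract's remark that the index is especially useful for evaluating weaker teams.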

Keywords: efficiency of a team, ranking, special table, theoretical mathematic

Procedia PDF Downloads 415
1050 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. The most interesting are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into the metallic state occurs. The behaviour of composite films near the percolation threshold, where a change of charge transport mechanism from thermally-activated tunnelling between individual metal objects to ohmic conductivity is observed, is especially important. The physical properties of composite films are given not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models, as well as on atomic modelling, are used here. Characterization of the prepared composite structures by image analysis of their sections or projections then follows. However, an analysis of the various morphological methods must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. 
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of conducting paths in them, in dependence on the technology of composite film preparation.
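The percolation transition discussed above, from tunnelling between isolated metal objects to ohmic conduction along connected paths, can be illustrated with a toy 2D lattice check: does the occupied ('metal') phase span the sample from one edge to the other? This is a generic sketch, not the authors' hard-sphere or kinetic Monte Carlo model.

```python
import numpy as np

def spanning_cluster_exists(grid):
    """Depth-first search for a connected path of occupied sites from the
    left edge to the right edge of a 2D lattice -- a toy stand-in for the
    onset of ohmic conduction at the percolation threshold."""
    rows, cols = grid.shape
    seen = np.zeros_like(grid, dtype=bool)
    stack = [(r, 0) for r in range(rows) if grid[r, 0]]
    for r, c in stack:
        seen[r, c] = True
    while stack:
        r, c = stack.pop()
        if c == cols - 1:          # reached the opposite edge: spanning path
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                stack.append((rr, cc))
    return False
```

Below the threshold (no spanning cluster), current can only flow by tunnelling between isolated clusters; above it, a connected metallic path exists and conduction becomes ohmic.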

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 387
1049 Application and Evaluation of Teaching-Learning Guides Based on Swebok for the Requirements Engineering Area

Authors: Mauro Callejas-Cuervo, Andrea Catherine Alarcon-Aldana, Lorena Paola Castillo-Guerra

Abstract:

The software industry requires highly trained professionals, capable of taking on the roles integrated in the software development cycle. That is why a large part of the task is the responsibility of higher education institutions, often through a curriculum established to orientate the academic development of the students. Thus, nowadays there are different models that support proposals for the improvement of curricula in the area of Software Engineering, such as ACM, IEEE, ABET, and Swebok, of which the last stands out, given that it manages and organises the knowledge of Software Engineering and offers a vision of its theoretical and practical aspects. Moreover, it has been applied by different universities in the pursuit of achieving coverage in delivering the different topics and increasing the professional quality of future graduates. This research presents the structure of teaching and learning guides built from training objectives and methodological strategies immersed in the levels of learning of Bloom’s taxonomy, with which it is intended to improve the delivery of the topics in the area of Requirements Engineering. These guides were implemented and validated in a Requirements Engineering course of the Systems and Computer Engineering programme at the Universidad Pedagógica y Tecnológica de Colombia (Pedagogical and Technological University of Colombia), using a four-stage methodology: definition of the evaluation model, implementation of the guides, guide evaluation, and analysis of the results. After the collection and analysis of the data, the results show that in six out of the seven topics proposed in the Swebok guide, the percentage of students who obtained total marks within the 'High grade' level, that is, between 4.0 and 4.6 (on a scale of 0.0 to 5.0), was higher than the percentage of students who obtained marks within the 'Acceptable' range of 3.0 to 3.9. 
In 86% of the topics and strategies proposed, the teaching and learning guides facilitated the students' comprehension, analysis, and articulation of the concepts and processes. In addition, the results mainly indicate that the guides strengthened the argumentative and interpretative competencies, while the remaining 14% denotes the need to reinforce the strategies regarding the propositive competence, given that it presented the lowest average.

Keywords: pedagogic guide, pedagogic strategies, requirements engineering, Swebok, teaching-learning process

Procedia PDF Downloads 282
1048 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed, and managed in terms of hydrological, chemical, and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, the Catchment Baseline Survey, or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological, and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness, each site being a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. 
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool built on the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.), and an alteration of this geometry indicates a geomorphological disturbance, whether natural or anthropogenic. Hydrographic mapping is also dynamic: its meaning changes if the specific stream power and/or the mean grain size change, that is, if the values of their equations change. The researcher has to check some of the control points annually; this procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
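The discriminant classification described above can be sketched as follows. This is a minimal, hypothetical illustration of a two-variable linear discriminant analysis (specific stream power vs. mean grain size); the geomorphological type names and all numeric values are invented for illustration and are not the Galician control-point data or the study's actual discriminant equations.

```python
# Minimal sketch: classify a river reach into a geomorphological type from
# (specific stream power, mean grain size) via linear discriminant analysis
# with a pooled within-class covariance. All data here are hypothetical.
import math

# (specific stream power, mean grain size) samples per hypothetical type
training = {
    "cascade":     [(900.0, 300.0), (850.0, 260.0), (950.0, 310.0)],
    "plane-bed":   [(120.0, 60.0),  (140.0, 70.0),  (110.0, 55.0)],
    "pool-riffle": [(35.0, 20.0),   (40.0, 25.0),   (30.0, 18.0)],
}

def mean(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def pooled_cov(groups):
    """Pooled within-class 2x2 covariance, assuming equal covariance per class."""
    sxx = sxy = syy = 0.0
    n_total = 0
    for points in groups.values():
        mx, my = mean(points)
        for x, y in points:
            sxx += (x - mx) ** 2
            sxy += (x - mx) * (y - my)
            syy += (y - my) ** 2
        n_total += len(points)
    dof = n_total - len(groups)
    return sxx / dof, sxy / dof, syy / dof

def classify(x, y, groups):
    sxx, sxy, syy = pooled_cov(groups)
    det = sxx * syy - sxy * sxy
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det  # inverse of the 2x2 covariance
    best_type, best_score = None, -math.inf
    for gtype, points in groups.items():
        mx, my = mean(points)
        # linear discriminant score: mu' S^-1 x - 0.5 * mu' S^-1 mu
        wx = ixx * mx + ixy * my
        wy = ixy * mx + iyy * my
        score = wx * x + wy * y - 0.5 * (wx * mx + wy * my)
        if score > best_score:
            best_type, best_score = gtype, score
    return best_type

print(classify(130.0, 65.0, training))
```

Each class's linear score here plays the role of the "individual discriminant equation" the abstract mentions: a new control point is assigned to the type whose equation gives the highest score.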

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 364
1047 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying aerosol climate forcing, and the uncertainty increases from the global to the regional scale. The problem remains unresolved due to the large discrepancies in the representation of cloud processes in climate models. Most studies of aerosol-cloud-climate and aerosol-cloud-precipitation interactions over the Indian Ocean (e.g., the INDOEX and CAIPEEX campaigns) are restricted to a particular season or a particular region. Here we develop a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products over a 15-year period (2000-2015) over the Indian Ocean. This framework relies on an observationally constrained estimate of the aerosol-induced change in cloud albedo. We partition the change in cloud albedo into changes in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to a change in aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive for LWP in the range 120-300 g/m² and Reff in the range 8-24 micrometers. Using this framework, aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The analysis gives the clearest results over the Arabian Sea, in comparison with the Bay of Bengal and the South Indian Ocean, because of the heterogeneity of aerosol species over the Arabian Sea: more absorbing aerosols dominate there during the winter season, while dust (coarse-mode aerosol particles) dominates during the pre-monsoon. In winter and pre-monsoon, aerosol forcing dominates, while during the monsoon and post-monsoon seasons meteorological forcing dominates. 
Over the South Indian Ocean, broadly the same aerosol type (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is -5 ± 4.5 W/m² in the winter season and weaker in the other seasons. The results provide observationally constrained estimates of aerosol indirect forcing over the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
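The albedo-sensitivity idea above can be sketched in a few lines. This is only an illustration of the framework's logic, not the study's implementation: cloud albedo is computed from LWP and Reff via the standard adiabatic optical-depth relation and a two-stream approximation, and its susceptibility is the least-squares slope of albedo against ln(AOD) within one LWP/Reff bin. All sample values are hypothetical, not MODIS retrievals.

```python
# Minimal sketch of an observationally constrained albedo susceptibility:
# tau = 3*LWP / (2*rho_w*Reff), albedo A ~ tau/(tau + 7.7), and the
# susceptibility dA/dln(AOD) is a least-squares slope within one bin.
import math

RHO_W = 1.0e6  # density of liquid water, g/m^3

def cloud_albedo(lwp_g_m2, reff_um):
    """Cloud albedo from LWP (g/m^2) and effective radius (micrometers)."""
    tau = 3.0 * lwp_g_m2 / (2.0 * RHO_W * reff_um * 1.0e-6)
    return tau / (tau + 7.7)

def albedo_susceptibility(samples):
    """Slope of albedo vs ln(AOD) for co-located (aod, lwp, reff) samples."""
    xs = [math.log(aod) for aod, _, _ in samples]
    ys = [cloud_albedo(lwp, reff) for _, lwp, reff in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical bin (LWP 120-300 g/m^2, Reff 8-24 um): Reff shrinking as
# AOD rises (the Twomey effect) brightens the cloud, giving a positive slope.
bin_samples = [
    (0.05, 150.0, 20.0),
    (0.10, 160.0, 17.0),
    (0.20, 170.0, 14.0),
    (0.40, 180.0, 11.0),
]
print(round(albedo_susceptibility(bin_samples), 3))
```

Repeating this slope calculation across a grid of LWP/Reff bins is what localizes the sensitivity to the 120-300 g/m² and 8-24 micrometer ranges reported above.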

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 169
1046 Numerical Investigation of Embankments for Protecting Rock Fall

Authors: Gökhan Altay, Cafer Kayadelen

Abstract:

Rock fall is the movement of large rock blocks down steep slopes due to physical effects. It generally occurs where loose tuffs lie under a basalt flow, or where a stringcourse is formed by limestone layers resting on clay. As some parts corrode, large cracks open in the layers, and these cracks continue to grow through freeze-thaw action; in this way, the breaking rocks fall from the steep slopes. Earthquakes, which can trigger many rock movements, are another cause of rock fall events. Turkey, like many other parts of the world, has a large number of earthquake-prone regions, which increases the likelihood of rock fall events. A great number of rock fall events take place in Turkey, as elsewhere in the world, every year. Rock fall events in urban areas cause serious damage to houses, roads and workplaces; they sometimes hinder transportation, and they may even kill people. In Turkey, rock fall events happen mostly in spring and winter because of frequent freeze-thaw cycles of water in rock cracks. In mountainous and steep areas, rock fall poses risks to engineering structures and the environment, and some countries invest significant money in these risky areas. In Switzerland, for instance, approximately 6.7 million dollars is spent annually on rock fall protection systems along a distance of 4 km. Turkey has many urban areas and engineering structures at risk of rock fall. Embankments are preferable as protection against rock fall because of their low maintenance and repair costs; they are also able to absorb much more energy than other protection systems. The current design method for embankments depends only on field test results, so studies of this design method are scarce. In this paper, a field test is modeled in three dimensions and analyses are carried out with the help of the ANSYS program; the numerical model is validated against a field test from the literature. 
After the numerical models were validated, additional parametric studies were performed. Changes in the deformation of the embankments are investigated under changes in the geometry, velocity and impact height of the falling rocks.
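The parametric sweep described above has, as its basic input, the impact energy the embankment must absorb. The sketch below illustrates that input with a simple free-fall estimate (E = m·g·h, neglecting friction, rotation and restitution); the rock density, block volumes and fall heights are hypothetical values, not the parameters of the study's ANSYS model.

```python
# Minimal sketch: impact energy of a free-falling rock block, swept over
# block volume and impact height. All parameter values are hypothetical.
G = 9.81           # gravitational acceleration, m/s^2
RHO_ROCK = 2700.0  # typical rock density, kg/m^3

def impact_energy_kj(volume_m3, fall_height_m):
    """Kinetic energy at impact for a free-falling block, in kJ (E = m*g*h)."""
    mass = RHO_ROCK * volume_m3
    return mass * G * fall_height_m / 1000.0

# Parametric sweep over block volume and impact height
for volume in (0.5, 1.0, 2.0):
    for height in (10.0, 20.0, 40.0):
        e = impact_energy_kj(volume, height)
        print(f"V={volume} m^3, h={height} m -> {e:.0f} kJ")
```

In a finite-element study, each (volume, height) pair of such a sweep would define one impact load case whose resulting embankment deformation is then computed.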

Keywords: ANSYS, embankment, impact height, numerical analysis, rock fall

Procedia PDF Downloads 507
1045 Preventive Effects of Motorcycle Helmets on Clinical Outcomes in Motorcycle Crashes

Authors: Seung Chul Lee, Jooyeong Kim, Ki Ok Ahn, Juok Park

Abstract:

Background: Injuries caused by motorcycle crashes are one of the major public health burdens, leading to high mortality and functional disability. The risk of death among motorcyclists is 30 times greater than that among car drivers, with head injuries the leading cause of death. The motorcycle helmet is crucial protective equipment for motorcyclists. Aims: This study aimed to measure the protective effect of motorcycle helmet use on intracranial injury and mortality and to compare the preventive effect in drivers and passengers. Methods: This is a cross-sectional study based on the Emergency Department (ED)-based Injury In-depth Surveillance (EDIIS) database from 23 EDs in Korea. All trauma patients injured in motorcycle crashes between January 1, 2013 and December 31, 2016 were eligible, excluding cases with unknown helmet use or outcomes. The primary and secondary outcomes were intracranial injury and in-hospital mortality. We calculated adjusted odds ratios (AORs) of helmet use for the study outcomes after adjusting for potential confounders. Using interaction models, we compared the protective effect of helmet use on outcomes across driving status (driver and passenger). Results: Among 17,791 eligible patients, 10,668 (60.0%) were wearing helmets at the time of the crash, 2,128 (12.0%) had intracranial injuries, and 331 (1.9%) died in hospital. 16,381 (92.1%) patients were drivers and 1,410 (7.9%) were passengers; 62.6% of drivers and 29.1% of passengers were wearing helmets at the time of the crash. Compared with the un-helmeted group, the helmeted group was less likely to have an intracranial injury (8.0% vs. 17.9%, AOR: 0.43 (0.39-0.48)) or in-hospital mortality (1.0% vs. 3.2%, AOR: 0.29 (0.22-0.37)). In the interaction model, AORs (95% CIs) of helmet use for intracranial injury were 0.42 (0.38-0.47) in drivers and 0.61 (0.41-0.90) in passengers, respectively. 
There was a significant preventive effect of helmet use on in-hospital mortality in drivers (AOR: 0.26 (0.21-0.34)). Discussion and conclusions: Wearing helmets in motorcycle crashes reduced intracranial injuries and in-hospital mortality. The preventive effect of motorcycle helmet use on intracranial injury was stronger in drivers than in passengers, and there was a significant preventive effect of helmet use on in-hospital mortality in drivers but not in passengers. Public health efforts to increase motorcycle helmet use are needed to reduce the health burden from injuries caused by motorcycle crashes.
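The unadjusted (crude) odds ratio implied by the reported percentages can be reconstructed from a simple 2x2 table. This is only an illustrative check: the abstract's AOR of 0.43 comes from a multivariable model adjusting for confounders, so the crude value below differs from it. The counts are back-calculated from the stated percentages (8.0% of 10,668 helmeted; 17.9% of the remaining 7,123 un-helmeted patients), so they are approximations.

```python
# Minimal sketch: crude odds ratio and 95% CI from a 2x2 table, using
# counts back-calculated (approximately) from the abstract's percentages.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table: a/b exposed events/non-events, c/d unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR) (Woolf method)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

helmeted_injury, helmeted_total = 853, 10668       # ~8.0% of 10,668
unhelmeted_injury, unhelmeted_total = 1275, 7123   # ~17.9% of 7,123

or_, lo, hi = odds_ratio_ci(
    helmeted_injury, helmeted_total - helmeted_injury,
    unhelmeted_injury, unhelmeted_total - unhelmeted_injury,
)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude OR comes out near 0.40, in the same direction as (but not identical to) the adjusted estimate, as expected when confounders shift the association only modestly.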

Keywords: intracranial injury, helmet, mortality, motorcycle crashes

Procedia PDF Downloads 177
1044 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts need to be extracted from unseen medical text. The system extracts key phrases from the new text and matches them against the complete set of terms from step 2; the most semantically similar ones are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. 
The method allows medical analysts to easily and efficiently build taxonomies representing their domain-specific concepts (in step 2) and to automatically annotate a large number of texts (in step 3) for the classification and summarization of medical reports.
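Step 2 above can be sketched as cosine-similarity seed-set expansion in an embedding space. The toy 3-dimensional vectors below stand in for real Phrase2Vec embeddings trained on an in-domain corpus; the terms, vector values and similarity threshold are all hypothetical choices made only so the geometry illustrates the idea.

```python
# Minimal sketch of seed-set expansion: candidate terms whose mean cosine
# similarity to the seed symptom terms passes a threshold are added to the
# concept. Embeddings here are hypothetical toy vectors, not Phrase2Vec output.
import math

# term -> embedding (real vectors would have hundreds of dimensions)
embeddings = {
    "dry mouth":      [0.90, 0.10, 0.00],
    "itchy skin":     [0.80, 0.20, 0.10],
    "blurred vision": [0.85, 0.15, 0.05],
    "fatigue":        [0.70, 0.30, 0.10],  # candidate: symptom-like
    "weight loss":    [0.75, 0.20, 0.20],  # candidate: symptom-like
    "metformin":      [0.10, 0.90, 0.30],  # candidate: medication-like
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand(seed_terms, candidates, threshold=0.9):
    """Return candidates whose mean similarity to the seed set passes the threshold."""
    expanded = []
    for term in candidates:
        sims = [cosine(embeddings[term], embeddings[s]) for s in seed_terms]
        if sum(sims) / len(sims) >= threshold:
            expanded.append(term)
    return expanded

seeds = ["dry mouth", "itchy skin", "blurred vision"]
candidates = ["fatigue", "weight loss", "metformin"]
print(expand(seeds, candidates))
```

Step 3 then reuses the same similarity test in the other direction: key phrases extracted from unseen text are matched against the expanded term set and inherit its concept label.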

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 129
1043 Effect of Forests and Forest Cover Change on Rainfall in the Central Rift Valley of Ethiopia

Authors: Alemayehu Muluneh, Saskia Keesstra, Leo Stroosnijder, Woldeamlak Bewket, Ashenafi Burka

Abstract:

There is some scientific evidence, and a belief held by many, that forests attract rain and that deforestation contributes to a decline in rainfall. However, concrete scientific evidence on the role of forests in determining rainfall amounts is still lacking. In this paper, we investigate forest-rainfall relationships in the environmental hot spot of the Central Rift Valley (CRV) of Ethiopia. Specifically, we evaluate long-term (1970-2009) rainfall variability and its relationship with historical forest cover, and the relationship between existing forest cover, topographical variables, and rainfall distribution. The study used 16 long-term and 15 short-term rainfall stations. The Mann-Kendall test and bivariate and multiple regression models were used. The results show that forest and woodland cover declined continuously over the 40-year period (1970-2009), yet annual rainfall on the rift valley floor increased by 6.42 mm/year, while on the escarpment and highlands it decreased by 2.48 mm/year. The increase in annual rainfall on the rift valley floor is partly attributable to increased evaporation, as a result of rising temperatures, from the four lakes on the rift valley floor. Although annual rainfall is decreasing on the escarpment and highlands, there was no significant correlation between this decrease and the decline in forest and woodland cover, and rainfall variability in the region was not explained by forest cover. Hence, the decrease in annual rainfall on the escarpment and highlands is more likely related to global warming of the atmosphere and of the surface waters of the Indian Ocean. The spatial variability of the number of rainy days, from two years of systematically observed rainfall data (2012-2013), was significantly (R²=-0.63) explained by forest cover (distance from forest), but forest cover was not a significant variable (R²=-0.40) in explaining annual rainfall amount. 
Overall, past deforestation and existing forest cover showed very little effect on long-term and short-term rainfall distribution, but a significant effect on the number of rainy days in the CRV of Ethiopia.
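The Mann-Kendall trend test used above can be sketched as follows: the S statistic counts concordant minus discordant pairs over time, and a normal approximation (with continuity correction; the tie correction is omitted here for brevity) gives a Z score for the trend's direction and significance. The rainfall series below is a hypothetical illustration, not the CRV station data.

```python
# Minimal sketch of the Mann-Kendall trend test (no tied values assumed).
import math

def mann_kendall_z(series):
    """Return (S, Z) for a time series; Z > 1.96 indicates a significant
    increasing trend at the 5% level under the normal approximation."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical annual rainfall (mm) with an upward drift, as reported
# for the rift valley floor
rainfall = [610, 625, 598, 640, 655, 632, 670, 690, 668, 702]
s, z = mann_kendall_z(rainfall)
print(s, round(z, 2))
```

The test is rank-based, so it detects monotonic trends without assuming the rainfall data are normally distributed, which is why it is a standard choice for hydro-climatic series.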

Keywords: elevation, forest cover, rainfall, slope

Procedia PDF Downloads 540
1042 Assessment and Adaptation Strategy of Climate Change to Water Quality in the Erren River and Its Impact to Health

Authors: Pei-Chih Wu, Hsin-Chih Lai, Yung-Lung Lee, Yun-Yao Chi, Ching-Yi Horng, Hsien-Chang Wang

Abstract:

The impact of climate change on health is well documented. Among these impacts, water-borne infectious diseases and the chronic adverse effects or cancer risks from chemical contamination during flooding or drought events are especially important in river basins. This study therefore utilizes GIS and different models to integrate demographic, land use, disaster prevention, socio-economic, and human health assessment factors in the Erren River basin. By collecting climatic, demographic, health surveillance, water quality and other water monitoring data, the potential risks associated with the Erren River basin are established, and human exposure and vulnerability in response to climate extremes are characterized. This study assesses the temporal and spatial patterns of melioidosis (2000-2015) and the incidence of various cancers in Tainan and Kaohsiung cities. The next step is to analyze the spatial associations between disease incidence, climatic factors, land use, and other demographic factors using ArcMap and GeoDa. The results show that 24% of all melioidosis cases in Taiwan (115 cases) occurred among residents of the Erren River basin. Case occurrence in Tainan and Kaohsiung cities is associated with population density, an aging indicator, and residence in the Erren River basin; in a spatial lag regression, flood risk from heavy rainfall and the presence of fish farms are also related factors. For liver cancer, preliminary analysis of the temporal and spatial patterns shows an increasing annual incidence without clusters in the Erren River basin. Further analysis of cancers potentially connected to heavy-metal contamination from water pollution in the Erren River has been established. The final step is to develop, in the second year, an assessment tool for human exposure to water contamination and vulnerability in response to climate extremes.
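The kind of spatial-association analysis described above (the clustering question GeoDa answers before fitting a spatial lag model) is typically summarized by global Moran's I. The sketch below illustrates that statistic on a hypothetical chain of six areas with rook-style adjacency; the incidence values and weights are invented, not the melioidosis surveillance data.

```python
# Minimal sketch of global Moran's I for disease incidence across areas:
# I = (n / sum(W)) * (sum_ij w_ij * d_i * d_j) / sum_i d_i^2,
# where d_i are deviations from the mean. Data here are hypothetical.
def morans_i(values, weights):
    """Global Moran's I: values[i] per area, weights[i][j] spatial weight."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Six areas in a chain; neighbors are adjacent areas (binary weights)
n = 6
W = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

# High incidence clustered at one end of the chain
incidence = [9.0, 8.0, 7.0, 3.0, 2.0, 1.0]
print(round(morans_i(incidence, W), 3))
```

A clearly positive I, as here, indicates that neighboring areas have similar incidence, which is the spatial autocorrelation that motivates fitting a spatial lag regression rather than an ordinary one.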

Keywords: climate change, health impact, health adaptation, Erren River Basin

Procedia PDF Downloads 299