Search results for: relative distance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4186

286 Examination of Indoor Air Quality of Naturally Ventilated Dwellings During Winters in Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as “the air quality within and around buildings, especially as it relates to the health and comfort of building occupants”. According to the 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy to poor air quality, and yet numerous dwellings remain dependent on natural ventilation. Urban populations currently spend 90% of their time indoors, a scenario that raises concern for occupant health and well-being. The built environment can affect health directly and indirectly through immediate or long-term exposure to indoor air pollutants. Health effects associated with indoor air pollutants include eye, nose, and throat irritation, respiratory diseases, heart disease, and even cancer. This study attempts to demonstrate the causal relationship between indoor air quality and its determining aspects. Detailed indoor air quality audits were conducted in residential buildings in Kolkata, India, in December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most critical due to unfavorable environmental conditions: while emissions remain roughly constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors such as traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it.
The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the dominant middle-income group. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses indoor air quality based on factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable-window-to-floor-area ratio, windward or leeward side openings, natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, urban housing, air pollution, natural ventilation, architecture, urban issues

285 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to more than 50% of all phosphorus fertilizer used annually. Nutrition rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese, and copper, because humans and non-ruminant animals such as poultry, swine, and fish have very scarce phytase activity and thus little ability to digest and utilize phytic acid, excreting it as phytate salts; phytic acid derived phosphorus in animal waste consequently contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N; 20º26´E) during the 2010-2011 and 2011-2012 growing seasons. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows with an area of 0.6 m2 (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten (“Perten”, Sweden) (particle size < 500 μm) and the flour was used for the analysis. Phytic acid grain content was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among selected genotypes of bread and durum wheat, ii) the predominant source of variation regarding genotype (G), environment (E), and genotype × environment interaction (GEI) from the multi-environment trial, and iii) the influence of climatic variables on the GEI for phytic acid content.
Analysis of variance showed that the variation of phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed for the variation of phytic acid content in bread wheat. Phytic acid content expressed on a dry mass basis was in the range 14.21-17.86 mg g⁻¹ (average 16.05 mg g⁻¹) for bread wheat and 14.63-16.78 mg g⁻¹ (average 15.91 mg g⁻¹) for durum wheat. The average-environment coordination view of the genotype plus genotype × environment (GGE) biplot was used to select the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability and a lower level of phytic acid content. The most desirable genotypes of bread and durum wheat for breeding for phytic acid were Apache and 37EDUYT /07 No. 7849. Models of climatic factors interpreted the GEI for phytic acid content to a high degree (> 91%); they included relative humidity in June, sunshine hours in April, mean temperature in April, and winter moisture reserves for genotypes of bread wheat, as well as precipitation in June and April, maximum temperature in April, and mean temperature in June for genotypes of durum wheat.
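The partition of multi-environment trial variation into genotype (G), environment (E), and GEI terms described above can be illustrated with a standard two-way sum-of-squares decomposition. The sketch below runs on a small hypothetical genotype × environment table of mean contents; it is a minimal illustration of the principle, not the authors' actual ANOVA model:

```python
def gei_partition(table):
    # table[g][e] = mean phytic acid content of genotype g in environment e
    G, E = len(table), len(table[0])
    grand = sum(map(sum, table)) / (G * E)
    g_means = [sum(row) / E for row in table]
    e_means = [sum(table[g][e] for g in range(G)) / G for e in range(E)]
    # Main-effect sums of squares for genotype and environment.
    ss_g = E * sum((m - grand) ** 2 for m in g_means)
    ss_e = G * sum((m - grand) ** 2 for m in e_means)
    # GEI: what remains after removing both main effects cell by cell.
    ss_gei = sum((table[g][e] - g_means[g] - e_means[e] + grand) ** 2
                 for g in range(G) for e in range(E))
    return ss_g, ss_e, ss_gei
```

For a purely additive table (no interaction), the GEI term vanishes; real trial data yield a nonzero GEI whose size relative to G and E indicates the predominant source of variation.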

Keywords: genotype × environment interaction, phytic acid, stability, variability

284 Analyzing the Websites of Institutions Publishing Global Rankings of Universities: A Usability Study

Authors: Nuray Baltaci, Kursat Cagiltay

Abstract:

University rankings, a relatively recent phenomenon, are a center of focus and are followed closely by different parties. Students consult university rankings to make informed decisions when selecting candidate future universities. University administrators and academicians can use them to evaluate their universities' relative performance in terms of, among other things, academic, economic, and international outlook issues. Local institutions may use those ranking systems, as TUBITAK (The Scientific and Technological Research Council of Turkey) and YOK (Council of Higher Education) do in Turkey, to support students and award scholarships for undergraduate and graduate studies abroad. Considering how many different parties rely on ranking systems, the importance of ranking institutions having clear, easy to use, and well-designed websites becomes apparent. In this paper, a usability study of the websites of four global university ranking institutions, namely the Academic Ranking of World Universities (ARWU), Times Higher Education, QS, and University Ranking by Academic Performance (URAP), was conducted. A user-based approach was adopted, and usability tests were conducted with 10 graduate students at Middle East Technical University in Ankara, Turkey. Before the formal usability tests, a pilot study was completed and the necessary changes were made to the settings of the study. Participants' demographics, task completion times, paths traced to complete tasks, and satisfaction levels on each task and website were collected. Based on the analyses of the collected data, the ranking websites were compared in terms of the efficiency, effectiveness, and satisfaction dimensions of usability, as defined in ISO 9241-11.
Results showed that none of the selected ranking websites is superior to the others in terms of overall effectiveness and efficiency. The only notable result was that the highest average task completion times for two of the designed tasks belonged to the Times Higher Education rankings website. Evaluation of user satisfaction on each task and each website produced slightly different but rather similar results. When the satisfaction levels of the participants on each task were examined, the highest scores belonged to the ARWU and URAP websites. The overall satisfaction levels of the participants for each website showed that the URAP website had the highest score, followed by the ARWU website. In addition, design problems and strong design features of the websites reported by the participants are presented in the paper. Since the study mainly addresses the design problems of the URAP website, the focus is on this website. Participants reported three main design problems: the unaesthetic and unprofessional design style of the website, improper map location on ranking pages, and improper listing of field names on the field ranking page.
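The three ISO 9241-11 dimensions collected in such tests can be summarized per website from the raw task records. The sketch below uses entirely hypothetical records and field names (website, task id, completion flag, time on task, satisfaction rating) to show one common operationalization: completion rate for effectiveness, mean time on task for efficiency, and mean rating for satisfaction:

```python
from statistics import mean

# Hypothetical task records: (website, task, completed, seconds, satisfaction 1-5)
records = [
    ("ARWU", 1, True, 42.0, 4),
    ("ARWU", 2, True, 35.5, 5),
    ("THE",  1, True, 61.2, 3),
    ("THE",  2, False, 90.0, 2),
]

def summarize(records, site):
    rows = [r for r in records if r[0] == site]
    effectiveness = sum(r[2] for r in rows) / len(rows)  # task completion rate
    efficiency = mean(r[3] for r in rows)                # mean time on task (s)
    satisfaction = mean(r[4] for r in rows)              # mean rating
    return effectiveness, efficiency, satisfaction
```

Comparing these per-site triples is what underlies statements such as "highest average task completion times" or "highest satisfaction scores" in the results above.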

Keywords: university ranking, user-based approach, website usability, design

283 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

The feasibility of extending the molecular imprinting technique to complex biomolecules is demonstrated in this research. The technique is promising in diverse applications such as drug delivery, disease diagnosis, catalysis, and impurity detection, as well as the treatment of various complications. While molecularly imprinted polymers (MIPs) remain robust for synthesizing materials with remarkable binding sites that have high affinities for specific molecules of interest, extending their use to complex biomolecules has remained elusive. This work reports the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than the whole template produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structure, so their imprinting is marred by challenges. In addressing this challenge, our attention is inclined toward targeted delivery, which uses molecular imprinting on the particle surface so that the particles recognize overexpressed proteins on the target cells. Our goal is thus to make nanoparticle surfaces that specifically bind to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins and motifs with conserved receptors of transferrin as the respective templates for MIPs, a significant improvement in MIP sensitivity to the binding of complex protein templates was noted.
Through Fluorescence Correlation Spectroscopy (FCS) measurements of the size of the protein corona after incubation of the synthesized nanoparticles with proteins, we noted a high affinity of the MIPs for their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound on the respective MIPs when incubated with similar concentrations of the protein mixture. Conclusion: Our findings show that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique merits of the technique, including high sensitivity and selectivity, relative ease of synthesis, and production of materials with higher physical robustness and stability, can be extended to templates that were previously not suitable candidates despite their abundance and use within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass-templates

282 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs

Authors: Regina A. Tayong, Reza Barati

Abstract:

A stable, fast, and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration on the yield stress of the filter cake and broken gel viscosity, varying polymer concentration/yield stress along the fracture face, fracture conductivity, fracture length, capillary pressure changes, and formation damage on fracturing fluid cleanup in tight gas reservoirs. The model has been validated against field data reported in the literature for the same reservoir. A 2D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the cleanup model by distributing 200 bbls of water around the fracture. A 2D, three-phase IMPES simulator incorporating yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and methods have been reported in enough detail to permit easy replication of the results. Increasing the capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal raised the cumulative gas production of a 200 lb/Mgal guar fluid by 10.83% through its effect on yield stress and fluid viscosity. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity, and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant.
Several correlations have been developed relating the pressure distribution and polymer concentration to distance along the fracture face, and the average polymer concentration to injection time. The gradient in the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration. The rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations, (ii) simulate increasing breaker activity on yield stress and broken gel viscosity, and (iii) quantify the effect of (i) and (ii) on cumulative gas production within reasonable computational time.
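The material balance on polymer mass mentioned above, and the quoted square-law growth of yield stress with fluid loss, can be sketched in a few lines. Both functional forms and all variable names below are illustrative assumptions, not the authors' actual equations: the balance assumes leakoff filters polymer out of the escaping water so that all injected polymer concentrates into the fracture volume, and the yield-stress relation is a bare proportionality with a hypothetical fitting constant k:

```python
def polymer_concentration(c_injected, v_injected, v_fracture):
    # Mass balance sketch: injected polymer mass (c_injected * v_injected)
    # remains in the fluid retained in the fracture (v_fracture), so the
    # concentration rises as carrier fluid leaks off to the formation.
    return c_injected * v_injected / v_fracture

def yield_stress(tau_ref, v_loss, k=1.0):
    # Illustrative square-law: yield stress grows with the square of the
    # fluid volume lost to the formation (k is a hypothetical constant).
    return tau_ref + k * v_loss ** 2
```

For example, if half the injected volume leaks off, a 200 lb/Mgal guar fluid concentrates to 400 lb/Mgal under this balance, which is why cleanup of the concentrated, higher-yield-stress gel near the fracture face dominates recovery.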

Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation

281 Association of Temperature Factors with Seropositive Results against Selected Pathogens in Dairy Cow Herds from Central and Northern Greece

Authors: Marina Sofia, Alexios Giannakopoulos, Antonia Touloudi, Dimitris C Chatzopoulos, Zoi Athanasakopoulou, Vassiliki Spyrou, Charalambos Billinis

Abstract:

Fertility of dairy cattle can be affected by heat stress when the ambient temperature rises above 30°C and the relative humidity ranges from 35% to 50%. The present study was conducted on dairy cattle farms during the summer months in Greece and aimed to identify the serological profile against pathogens that could affect fertility and to associate positive serological results at the herd level with temperature factors. A total of 323 serum samples were collected from clinically healthy dairy cows of 8 herds located in Central and Northern Greece. ELISA tests were performed to detect antibodies against selected pathogens that affect fertility, namely Chlamydophila abortus, Coxiella burnetii, Neospora caninum, Toxoplasma gondii, and Infectious Bovine Rhinotracheitis Virus (IBRV). Eleven climatic variables were derived from WorldClim version 1.4, and ArcGIS v10.1 software was used for analysis of the spatial information. Five MaxEnt models, one per pathogen, were applied to associate the temperature variables with the locations of herds seropositive for Chl. abortus, C. burnetii, N. caninum, T. gondii, and IBRV. The logistic outputs were used for the interpretation of the results. ROC analyses were performed to evaluate the goodness of fit of the models' predictions, and jackknife tests were used to identify the variables with a substantial contribution to each model. The seropositivity rates varied among the 8 herds (0.85-4.76% for Chl. abortus, 4.76-62.71% for N. caninum, 3.8-43.47% for C. burnetii, 4.76-39.28% for T. gondii, and 47.83-78.57% for IBRV). The variables of annual temperature range, mean diurnal range, and maximum temperature of the warmest month contributed to all five models. The regularized training gains, training AUCs, and unregularized training gains were estimated.
The mean diurnal range gave the highest gain when used in isolation and decreased the gain the most when omitted in the models for herds seropositive for Chl. abortus and IBRV. The annual temperature range increased the gain when used alone and decreased the gain the most when omitted in the models for herds seropositive for C. burnetii, N. caninum, and T. gondii. In conclusion, antibodies against Chl. abortus, C. burnetii, N. caninum, T. gondii, and IBRV were detected in most herds, suggesting circulation of pathogens that could cause infertility. The results of the spatial analyses demonstrated that the annual temperature range, mean diurnal range, and maximum temperature of the warmest month could positively affect the possible presence of these pathogens. Acknowledgment: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-01078).
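The ROC analysis used above to evaluate the MaxEnt predictions rests on the AUC, which can be computed directly from its probabilistic (Mann-Whitney) definition: the probability that a randomly chosen positive site scores above a randomly chosen negative one, counting ties as half. A minimal pure-Python sketch on hypothetical prediction scores:

```python
def roc_auc(scores_pos, scores_neg):
    # AUC = P(score of a positive > score of a negative), ties count 0.5.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means the model's logistic output perfectly separates seropositive from seronegative locations, while 0.5 is no better than chance.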

Keywords: dairy cows, seropositivity, spatial analysis, temperature factors

280 Impact of Ecosystem Engineers on Soil Structuration in a Restored Floodplain in Switzerland

Authors: Andreas Schomburg, Claire Le Bayon, Claire Guenat, Philip Brunner

Abstract:

Numerous river restoration projects have been established in Switzerland in recent years after decades of human activity in floodplains. The success of restoration projects in terms of biodiversity and ecosystem functions largely depends on the development of the floodplain soil system. Plants and earthworms, as ecosystem engineers, are known to build up a stable soil structure by incorporating soil organic matter into the soil matrix, which creates water-stable soil aggregates. Their engineering efficiency, however, largely depends on changing soil properties and frequent floods along an evolutive floodplain transect. This study therefore aims to quantify the effect of flood frequency and duration, as well as of physico-chemical soil parameters, on the engineering efficiency of plants and earthworms. It is furthermore predicted that these influences may affect one of the engineers differently, leading to varying contributions to aggregate formation within the floodplain transect. Ecosystem engineers were sampled and described in three different floodplain habitats, differentiated according to the evolutionary stages of the vegetation, ranging from pioneer to forest vegetation, in a floodplain restored 15 years ago. In addition, the same analyses were performed in an embanked adjacent pasture as a reference for the pre-restored state. Soil aggregates were collected and analyzed for their organic matter quantity and quality using Rock-Eval pyrolysis. Water level and discharge measurements dating back to 2008 were used to quantify the return period of major floods. Our results show an increasing amount of water-stable aggregates in soil with increasing distance from the river, with the largest values in the reference site. A decreasing flood frequency and the proportion of silt and clay in the soil texture explain these findings, according to F values from one-way ANOVA of a fitted mixed-effect model.
Significantly larger amounts of labile organic matter signatures were found in soil aggregates in the forest habitat and in the reference site, indicating a larger contribution of plants to soil aggregation in these habitats compared to the pioneer vegetation zone. The contribution of earthworms to soil aggregation does not differ significantly along the floodplain transect, but their effect could be identified even in the pioneer vegetation, with its large proportion of coarse sand in the soil texture and frequent inundations. These findings indicate that ecosystem engineers seem able to create soil aggregates even under unfavorable soil conditions and frequent floods. Restoration success can therefore be expected even in ecosystems with harsh soil properties and frequent external disturbances.

Keywords: ecosystem engineers, flood frequency, floodplains, river restoration, rock eval pyrolysis, soil organic matter incorporation, soil structuration

279 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging

Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott

Abstract:

The use of radiomics to measure geometric properties of tumor images, such as size, surface area, and volume, has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2D topological features), which are markers of very aggressive tumors. We develop a data pipeline in R that clusters tumor images based on persistent homology and use it to identify meaningful clinical distinctions between tumors and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 Magnetic Resonance Imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient's breast tumor prior to treatment. The ISPY1 dataset also provided estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix of up to 2-dimensional features was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand index was used to see how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) clustered together significantly in the 2-dimensional features dendrogram (adjusted Rand index of .35, p = .031).
It is known that a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis compared to non-triple-negative breast tumors. The aggressive tumor growth associated with triple-negative tumors may have a unique structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results for the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites such as the lung, kidney, and brain. In addition, we will assess whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype, are captured well in persistent homology clusters. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, it could enable clinicians to use persistent homology data as a noninvasive input to clinical decision making in oncology.
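The adjusted Rand index used above to compare persistent-homology clusters with receptor status can be computed from the pair-counting contingency of two partitions. A minimal pure-Python sketch (the label vectors in the test are hypothetical, not the ISPY1 data):

```python
from collections import Counter

def comb2(n):
    # Number of unordered pairs among n items.
    return n * (n - 1) // 2

def adjusted_rand_index(labels_a, labels_b):
    # Pair counts within each cell of the contingency table of the two
    # partitions, and within each row/column margin.
    pairs = Counter(zip(labels_a, labels_b))
    a = Counter(labels_a)
    b = Counter(labels_b)
    n = len(labels_a)
    sum_ij = sum(comb2(c) for c in pairs.values())
    sum_a = sum(comb2(c) for c in a.values())
    sum_b = sum(comb2(c) for c in b.values())
    # Chance-corrected agreement: (index - expected) / (max - expected).
    expected = sum_a * sum_b / comb2(n)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

An ARI of 1 indicates identical partitions (up to relabeling), 0 indicates chance-level agreement, and negative values indicate worse-than-chance agreement, so the reported ARI of .35 reflects partial but significant correspondence between the 2-dimensional clusters and triple-negative status.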

Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging

278 Assessing the High Rate of Deforestation Caused by the Operations of Timber Industries in Ghana

Authors: Obed Asamoah

Abstract:

Forests are vital for human survival and well-being. During the past years, the world has taken an increasingly significant role in the modification of the global environment. The high rate of deforestation in Ghana is of primary national concern, as the forests provide many ecosystem services and functions that support the country's predominantly agrarian economy and foreign earnings. Ghana's forests are currently a major carbon sink that helps to mitigate climate change. Ghana's forests, both reserves and off-reserves, are under deforestation pressure. The causes of deforestation are varied but can broadly be categorized into anthropogenic and natural factors. Among the anthropogenic factors, increased wood fuel collection, clearing of forests for agriculture, illegal and poorly regulated timber extraction, social and environmental conflicts, and increasing urbanization and industrialization are the primary known causes of the loss of forests and woodlands. Mineral exploitation in forest areas is considered one of the major causes of deforestation in Ghana. Mining activities, especially gold mining by both licensed mining companies and illegal mining groups locally known as "galamsey", also damage the nation's forest reserves. Several studies have examined the causes of the high rate of deforestation in Ghana, with major attention placed on illegal logging and the use of forest lands for illegal farming and mining activities. Less emphasis has been placed on the timber production companies, their harvesting methods, and the other activities they carry out in the forests of Ghana. The main objective of this work is to examine the harvesting methods and activities of the timber production companies and their effects on the forests in Ghana. Both qualitative and quantitative research methods were employed.
The study population comprised 20 timber industries (sawmills) in forest areas of Ghana, selected randomly. The cluster sampling technique was used to select the respondents, and both primary and secondary data were employed. It was observed that most of the timber production companies do not know the age or weight of harvested trees, or the distance covered from the harvesting site to the loading site in the forest. It was also observed that the timber production companies use old and heavy machines in their forest operations, which compact the soil, preventing regeneration and promoting soil erosion. Timber production companies were found not to abide by the rules and regulations governing their operations in the forest. The high rate of corruption among officials of the Ghana Forestry Commission means the officials are lax and do not properly monitor the operations of the timber production companies, allowing the companies to cause more harm to the forest. To curb this situation, the Ghana Forestry Commission, together with the Ministry of Lands and Natural Resources, should monitor the activities of the timber production companies and sanction all companies that break the rules. The commission should also pay more attention to the "fell one, plant 10" policy to enhance regeneration in both reserve and off-reserve forests.

Keywords: companies, deforestation, forest, Ghana, timber

277 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach

Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake

Abstract:

Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics, through theoretical and experimental explorations. It is demonstrated that the ultrasonic transducer prototype design is better suited in terms of levitation capabilities, although it presents some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components, and integrated circuits, non-contact, oil-free, ultra-precision, low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, whose main part is an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, consisting of a plain rectangular aluminium plate, 200 × 100 × 2 mm, firmly fastened at both ends. Four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate, and the vibrating plate is clamped at both ends in the horizontal plane through a steel supporting structure. The dynamics of levitation using designs A and B have been investigated based on squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), which magnifies the sine wave voltage produced by the signal generator.
The measured maximum levitation gaps for three semiconductor wafers of weights 52, 70 and 88 g for design A are 240, 205 and 187 µm, respectively, whereas the experimental results for design B show that the average separation distance for a disk of 5 g weight reaches 70 µm. Using the methodology of squeeze-film levitation, it is thus possible to hold an object in a non-contact manner. The analysis of the results indicates that design A provides better non-contact levitation performance than design B; however, design A is more complicated to manufacture. In order to identify an adequate non-contact SFL design, a comparison between these two common designs has been adopted for the current investigation. Specifically, the study makes comparisons in terms of the following issues: floating component geometries and material type constraints; final created pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and complexity and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design.
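The reported design A measurements (wafer weight versus levitation gap) are roughly linear over the tested range. As a minimal illustration, and assuming linearity holds between the measured points, a least-squares fit can interpolate the expected gap for an intermediate wafer weight:

```python
import numpy as np

# Reported design A measurements: wafer weight [g] vs. maximum levitation gap [um]
weights = np.array([52.0, 70.0, 88.0])
gaps = np.array([240.0, 205.0, 187.0])

# Least-squares linear fit: gap ~ a * weight + b
a, b = np.polyfit(weights, gaps, 1)

def predicted_gap(weight_g):
    """Interpolated levitation gap [um] for a given wafer weight [g]."""
    return a * weight_g + b
```

Under this assumption a 60 g wafer would levitate at roughly 225 µm; extrapolating outside the measured 52-88 g range is not supported by the data.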

Keywords: ANSYS, floating, piezoelectric, squeeze-film

Procedia PDF Downloads 147
276 An Early Intervention Framework for Supporting Students’ Mathematical Development in the Transition to University STEM Programmes

Authors: Richard Harrison

Abstract:

Developing competency in mathematics and related critical thinking skills is essential to the education of undergraduate students of Science, Technology, Engineering and Mathematics (STEM). Recently, the HE sector has been impacted by a seemingly widening disconnect between the mathematical competency of incoming first-year STEM students and their entrance qualification tariffs. Despite relatively high grades in A-Level Mathematics, students may initially lack fundamental skills in key areas such as algebraic manipulation and have limited capacity to apply problem-solving strategies. This disconnect, compounded by compensatory measures applied to entrance qualifications during the pandemic, has been accompanied by a decline in student performance on introductory university mathematics modules. In the UK, a number of online resources have been developed to help scaffold the transition to university mathematics. However, in general, these do not offer a structured learning journey focused on individual developmental needs, nor do they offer an experience coherent with the teaching and learning characteristics of the destination institution. In order to address some of these issues, a bespoke framework has been designed and implemented on our VLE in the Faculty of Engineering & Physical Sciences (FEPS) at the University of Surrey. Called the FEPS Maths Support Framework, it was conceived to scaffold the mathematical development of individuals prior to entering the university and during the early stages of their transition to undergraduate studies. More than 90% of our incoming STEM students voluntarily participate in the process. Students complete a set of initial diagnostic questions in the late summer. Based on their performance and feedback on these questions, they are subsequently guided to self-select specific mathematical topic areas for review using our proprietary resources. This further assists students in preparing for discipline-related diagnostic tests.
The framework helps to identify students who are mathematically weak and facilitates early intervention to support students according to their specific developmental needs. This paper presents a summary of results from a rich data set captured from the framework over a 3-year period. Quantitative data provides evidence that students have engaged and developed during the process. This is further supported by process evaluation feedback from the students. Ranked performance data associated with seven key mathematical topic areas and eight engineering and science discipline areas reveals interesting patterns which can be used to identify more generic relative capabilities of the discipline area cohorts. In turn, this facilitates evidence-based management of the mathematical development of the new cohort, informing any associated adjustments to teaching and learning at a more holistic level. Evidence is presented establishing our framework as an effective early intervention strategy for addressing the sector-wide issue of supporting the mathematical development of STEM students transitioning to HE.

Keywords: competency, development, intervention, scaffolding

Procedia PDF Downloads 59
275 Contraception in Guatemala, Panajachel and the Surrounding Areas: Barriers Affecting Women’s Contraceptive Usage

Authors: Natasha Bhate

Abstract:

Contraception is important in helping to reduce maternal and infant mortality rates by allowing women to control the number and spacing of their children. It also reduces the need for unsafe abortions. Women worldwide use contraception; however, the contraceptive prevalence rate is still relatively low in Central American countries like Guatemala. There is also an unmet need for contraception in Guatemala, which is more significant in rural, indigenous women due to barriers preventing contraceptive use. The study objective was to investigate and analyse the current barriers women face, in Guatemala, Panajachel and the surrounding areas, in using contraception, with a view to identifying ways to overcome these barriers. This included exploring the contraceptive barriers women believe exist and the influence of males in contraceptive decision-making. The study took place at a charity in Panajachel, Guatemala, and had a cross-sectional, qualitative design to allow an in-depth understanding of the information gathered. This particular study design was also chosen to help inform the charity with qualitative research analysis, in view of their intent to create a local reproductive health programme. A semi-structured interview design, including photo facilitation to improve cross-cultural communication, with interpreter assistance, was utilized. A pilot interview was initially conducted, after which small improvements were made. Participants were recruited through purposive and convenience sampling. The study host at the charity acted as a gatekeeper; participants were identified through attendance of the charity’s women’s-initiative programme workshops. Twenty participants were selected and agreed to participate; two did not attend, so a total of 18 participants were interviewed in June 2017. Interviews were audio-recorded and data were stored on encrypted memory sticks. Framework analysis was used to analyse the data using NVivo11 software.
The University of Leeds granted ethical approval for the research. Religion, language, the community, and fear of sickness were examples of existing contraceptive barrier themes recognized by many participants. The influence of men was also an important barrier identified, with themes of machismo and abuse preventing contraceptive use in some women. Women from more rural areas were believed to still face barriers which some participants no longer encountered, such as the distance to and affordability of contraceptives. Participants believed that informative workshops in various settings were an ideal method of overcoming existing contraceptive barriers and allowing women to be more empowered. The involvement of men in such workshops was also deemed important by participants to help reduce their negative influence on contraceptive usage. Overall, four recommendations were made following this study: contraceptive educational courses, a gender equality campaign, couple-focused contraceptive workshops, and further qualitative research to gain a better insight into men’s opinions regarding women using contraception.

Keywords: barrier, contraception, machismo, religion

Procedia PDF Downloads 122
274 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya

Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui

Abstract:

Health outcomes have improved in Kenya since 2013, following the adoption of the new constitution with devolved governance, under which administration and health planning functions were transferred to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of Covid-19 and the increase in non-communicable diseases pose a challenge and a constraint on an already overwhelmed health system. A study was conducted July-November 2021 to establish key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. 14 health workers, ranging from nurses, doctors, public health officers, and clinical officers to pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by two trained researchers observing ethical procedures on confidentiality, and the data were analysed. Communicable diseases are major causes of morbidity and mortality, and non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, sanitation and hand washing, unsafe sex, and malnutrition. A key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. The study recommends improving targeted disease control with effective and equitable resource allocation, developing strong infectious disease control mechanisms, improving the quality of data for decision making, and strengthening electronic data-capture systems.
It further recommends increasing investments in the health workforce to improve health service provision and the achievement of universal health coverage; creating a favorable environment to retain health workers; and filling staffing gaps, notably the shortage of doctors (7% of the workforce). A multi-sectoral approach to health workforce planning and management should be developed, with investment in mechanisms that generate contextual evidence on current and future health workforce needs, retention of a qualified, skilled, and motivated health workforce, and delivery of integrated people-centered health services.

Keywords: multi-sectoral approach, equity, people-centered, health workforce retention

Procedia PDF Downloads 107
273 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has been marked by an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching, and by extension, about the very principles of architectural education. In this paper, it is intended to present the results of a research project developed on the following problem: the relation between the forms of representation and the architectural design teaching-learning processes. The research had as its object the educational model of two schools – the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP) – and was led by three main objectives: to characterize the educational model followed in both schools, focused on the representative component and its role; to interpret the relation between forms of representation and the architectural design teaching-learning processes; and to consider their possibilities of valorisation. Methodologically, the research was conducted according to a qualitative embedded multiple-case study design. The object – i.e., the educational model – was approached in both the POLIMI and FAUP cases considering its Context and three embedded unities of analysis: the educational Purposes, Principles, and Practices. In order to guide the procedures of data collection and analysis, a Matrix for the Characterization (MCC) was developed. As a methodological tool, the MCC allowed the three embedded unities of analysis to be related to the three main sources of evidence where the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review.
The results reveal the importance of the representative component in the educational model of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP’s model, it plays a transversal role – according to an idea of 'general training through hand drawing'. In fact, the difference between models relative to representation can be partially understood by the level of importance that each gives to hand drawing. Regarding the teaching of architectural design, the two cases are distinguished in the relation with the representative component: while in POLIMI the forms of representation serve essentially an instrumental purpose, in FAUP they tend to be considered also for their methodological dimension. It seems that the possibilities for valuing these models reside precisely in the relation between forms of representation and architectural design teaching. It is expected that the knowledge base developed in this research may have three main contributions: to contribute to the maintenance of the educational model of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; through the critical and objective framework of the problem underlying the forms of representation and its relation with architectural design teaching, to contribute to the broader discussion concerning the contemporary challenges on architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 115
272 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are likely to affect its stability and safety. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, improving both control capability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focused on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by generators, and the specific line power flow between the two-layer regions can be adjusted by FSC and TCSC.
The smaller the interface power flow adjusted by the generators, the larger the control margin of the TCSC, but the higher the total generator consumption. Secondly, the coordination of different time scales is studied to balance the total generator consumption against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to a decrease in control cost and will provide a reference for subsequent studies in this field.
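A genetic algorithm of the kind selected above can be sketched in a few lines. The cost function here is an entirely hypothetical trade-off between an expensive generator adjustment `g` and a cheap TCSC adjustment `t` under a flow-balance constraint; it is not the authors' actual optimal control model, whose encoding and constraints the abstract does not specify:

```python
import random

random.seed(0)

# Hypothetical cost: generator adjustment g is expensive, TCSC adjustment t
# is cheap, but together they must cancel a 1.0 p.u. flow deviation (g+t=1),
# enforced here as a soft quadratic penalty.
def cost(g, t):
    penalty = 100.0 * (g + t - 1.0) ** 2
    return 5.0 * g ** 2 + 1.0 * t ** 2 + penalty

def ga(pop_size=40, generations=60):
    pop = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # averaging crossover
            child = (child[0] + random.gauss(0, 0.05),       # Gaussian mutation
                     child[1] + random.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: cost(*ind))

g_best, t_best = ga()
```

Truncation selection with averaging crossover and Gaussian mutation is one of the simplest GA variants; the solution settles near the cheap-device-heavy optimum (small `g`, large `t`), mirroring the trade-off described in the abstract.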

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 261
271 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), with both measures increasing in the winter and decreasing in the summer. On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in whom triglycerides did not show rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in the inverse direction: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures, high concentrations of LDL-C and HDL-C, and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
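The cross-correlation functions used in the analysis can be illustrated with a small numpy sketch. The synthetic monthly series below (an inverse-phase temperature/lipid pair over 8 years) merely stand in for the Campinas records:

```python
import numpy as np

# Synthetic monthly series over 8 years: temperature peaks in "summer",
# LDL-C peaks in "winter" (inverse phase), as reported for Campinas.
months = np.arange(96)
temperature = 25 + 5 * np.sin(2 * np.pi * months / 12)
ldl = 130 - 10 * np.sin(2 * np.pi * months / 12)

def cross_correlation(x, y, max_lag=12):
    """Correlation of y against x shifted by each lag (edges trimmed)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return {lag: float(np.mean(np.roll(x, lag)[max_lag:-max_lag] *
                               y[max_lag:-max_lag]))
            for lag in range(-max_lag, max_lag + 1)}

ccf = cross_correlation(temperature, ldl)
strongest_lag = min(ccf, key=lambda k: ccf[k])  # lag of strongest inverse coupling
```

For these idealized series the correlation is strongly negative at lag 0 (high temperature, low LDL-C) and flips sign half a year out of phase; on real data the peaks would of course be weaker and possibly shifted.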

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 114
270 Parametric Study for Obtaining the Structural Response of Segmental Tunnels in Soft Soil by Using No-Linear Numerical Models

Authors: Arturo Galván, Jatziri Y. Moreno-Martínez, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado

Abstract:

In recent years, one of the methods most used for the construction of tunnels in soft soil is shield-driven tunneling. The advantage of this construction technique is that it allows excavating the tunnel while, at the same time, a primary lining consisting of precast segments is placed. There are joints between segments, also called longitudinal joints, and joints between rings (called circumferential joints). For this reason, this type of construction cannot be considered a continuous structure. These joints influence the rigidity of the segmental lining and therefore its structural response. A parametric study was performed to take into account the effect of different parameters on the structural response of typical segmental tunnels built in soft soil, using non-linear numerical models based on the Finite Element Method by means of the software package ANSYS v. 11.0. In the first part of this study, two types of numerical models were developed. In the first one, the segments were modeled using beam elements based on Timoshenko beam theory, whilst the segment joints were modeled using inelastic rotational springs following the constitutive moment-rotation relation proposed by Gladwell; in this way, the mechanical behavior of the longitudinal joints was simulated. The mechanical behavior of the circumferential joints, in turn, was simulated with elastic springs, and the support provided by the soil was modeled by means of linear-elastic springs. In the second type of model, the segments were modeled by means of three-dimensional solid elements and the joints with contact elements. In these models, the zone of the joints is modeled as a discontinuity (increasing the computational effort), and a discrete model is therefore obtained.
With these contact elements, the mechanical behavior of the joints is simulated on the premise that when the joint is closed there is transmission of compressive and shear stresses but not of tensile stresses, and when the joint is opened there is no transmission of stresses. This type of model can detect changes in geometry caused by the relative movement of the elements that form the joints. A comparison between the numerical results of the two types of models was carried out; in this way, the hypotheses considered in the simplified models were validated. In addition, the numerical models were calibrated with lab-based experimental results, obtained from the literature, for a typical tunnel built in Europe. In the second part of this work, a parametric study was performed using the simplified models, owing to their lower computational cost compared to the complex models. In the parametric study, the effects of the material properties, the geometry of the tunnel, the arrangement of the longitudinal joints and the coupling of the rings were studied. Finally, it was concluded that the mechanical behavior of the segment and ring joints and the arrangement of the segment joints affect the global behavior of the lining, and that the coupling between rings modifies the structural capacity of the lining.
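The longitudinal-joint behaviour described above (stiff while closed, softening once the joint opens) can be illustrated with a generic bilinear rotational spring. This is a simplified stand-in with made-up parameter values, not the Gladwell moment-rotation relation actually used in the paper:

```python
def joint_moment(theta, k_closed=5.0e7, m_open=2.0e5, k_open_ratio=0.05):
    """Moment [N*m] transmitted by a segment joint at rotation theta [rad].

    Linear stiffness k_closed while the joint is closed; once the moment
    reaches m_open the joint opens and the stiffness drops to a small
    fraction of the closed-joint value. (Illustrative bilinear law only;
    parameter values are hypothetical.)
    """
    theta_open = m_open / k_closed      # rotation at which the joint opens
    sign = 1.0 if theta >= 0 else -1.0
    t = abs(theta)
    if t <= theta_open:
        return sign * k_closed * t                             # joint closed
    return sign * (m_open + k_open_ratio * k_closed * (t - theta_open))  # opened
```

In a beam-spring lining model, a law of this shape would be sampled at each load step to update the rotational spring force, reproducing the loss of bending stiffness once a joint opens.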

Keywords: numerical models, parametric study, segmental tunnels, structural response

Procedia PDF Downloads 224
269 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to inform the reader briefly about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Aids of AI in chest radiology: Detects and segments pulmonary nodules. Subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. Reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. Detects pleural effusion. Differentiates tension pneumothorax from non-tension pneumothorax. Detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. Automatically localises vertebral segments, labels ribs and detects rib fractures. Measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. Detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD). Can evaluate lobar volumes. Identifies and labels pulmonary bronchi and vasculature and quantifies air-trapping. Offers emphysema evaluation. Provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. Assesses the lung parenchyma by way of density evaluation.
Provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. Improves image quality for noisy images with a built-in denoising function. Detects emphysema, a condition commonly seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It can be assumed that the continuing progression of computerized systems and improvements in software algorithms will render AI the radiologist's second pair of hands.
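The attenuation-range quantification mentioned above can be sketched in a few lines of numpy. The -950 HU emphysema threshold used here is the commonly cited LAA-950 convention, not a value taken from the abstract, and the toy volume is synthetic:

```python
import numpy as np

def hu_range_percentages(hu_volume, ranges):
    """Percentage of voxels falling within each attenuation (HU) range."""
    total = hu_volume.size
    return {name: 100.0 * np.count_nonzero((hu_volume >= lo) & (hu_volume < hi)) / total
            for name, (lo, hi) in ranges.items()}

# Toy segmented lung volume: mostly normal lung density, with one
# emphysema-like (very low attenuation) region.
rng = np.random.default_rng(0)
volume = rng.normal(-850, 50, size=(64, 64, 64))
volume[:16] = rng.normal(-980, 10, size=(16, 64, 64))

percentages = hu_range_percentages(volume, {
    "emphysema (LAA-950)": (-1024, -950),
    "normal lung": (-950, -700),
})
```

On real CT data, the same computation would run over the lung mask produced by the automated segmentation step mentioned in the abstract.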

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 67
268 Impact of Insect-Feeding and Fire-Heating Wounding on Wood Properties of Lodgepole Pine

Authors: Estelle Arbellay, Lori D. Daniels, Shawn D. Mansfield, Alice S. Chang

Abstract:

Mountain pine beetle (MPB) outbreaks are currently devastating lodgepole pine forests in western North America, which are also widely disturbed by frequent wildfires. Both MPB and fire can leave scars on lodgepole pine trees, thereby diminishing their commercial value and possibly compromising their utilization in solid wood products. In order to fully exploit the affected resource, it is crucial to understand how wounding from these two disturbance agents impacts wood properties. Moreover, previous research on lodgepole pine has focused solely on sound wood and stained wood resulting from the MPB-transmitted blue-stain fungi. By means of a quantitative multi-proxy approach, we tested the hypotheses that (i) wounding (of either MPB or fire origin) caused significant changes in wood properties of lodgepole pine and that (ii) MPB-induced wound effects could differ from those induced by fire in type and magnitude. Pith-to-bark strips were extracted from 30 MPB scars and 30 fire scars. Strips were cut immediately adjacent to the wound margin and encompassed 12 rings from normal wood formed prior to wounding and 12 rings from wound wood formed after wounding. Wood properties evaluated within this 24-year window included ring width, relative wood density, cellulose crystallinity, fibre dimensions, and carbon and nitrogen concentrations. Methods used to measure these proxies at a (sub-)annual resolution included X-ray densitometry, X-ray diffraction, fibre quality analysis, and elemental analysis. Results showed a substantial growth release in wound wood compared to normal wood, as both earlywood and latewood width increased over a decade following wounding. Wound wood was also shown to have a significantly different latewood density than normal wood 4 years after wounding. Latewood density decreased in MPB scars, while the opposite was true in fire scars. By contrast, earlywood density presented only minor variations following wounding.
Cellulose crystallinity decreased in wound wood compared to normal wood, being especially diminished in MPB scars the first year after wounding. Fibre dimensions also decreased following wounding. However, carbon and nitrogen concentrations did not substantially differ between wound wood and normal wood. Nevertheless, insect-feeding and fire-heating wounding were shown to significantly alter most wood properties of lodgepole pine, as demonstrated by the existence of several morphological anomalies in wound wood. MPB and fire generally elicited similar anomalies, with the major exception of latewood density. In addition to providing quantitative criteria for differentiating between biotic (MPB) and abiotic (fire) disturbances, this study provides the wood industry with fundamental information on the physiological response of lodgepole pine to wounding in order to evaluate the utilization of scarred trees in solid wood products.

Keywords: elemental analysis, fibre quality analysis, lodgepole pine, wood properties, wounding, X-ray densitometry, X-ray diffraction

Procedia PDF Downloads 314
267 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. Memorability also correlates negatively with the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
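The three quantities correlated with memorability can be sketched in a few lines. The code below is an illustrative reconstruction using plain NumPy arrays; the function names and the brute-force nearest-neighbor search are assumptions, not the authors' implementation (which used a VGG-based autoencoder on images):

```python
import numpy as np

def reconstruction_error(originals, reconstructions):
    """Per-image mean squared error between original and reconstructed images."""
    diff = originals - reconstructions
    return (diff ** 2).mean(axis=tuple(range(1, originals.ndim)))

def distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbour."""
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each image's zero self-distance
    return d.min(axis=1)

def pearson(x, y):
    """Pearson correlation, e.g. between an image statistic and memorability."""
    return float(np.corrcoef(x, y)[0, 1])
```

With per-image errors, nearest-neighbor distances, and memorability scores in hand, `pearson` gives the correlations reported in the abstract.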

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 77
266 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings

Authors: Nadine Maier, Martin Mensinger, Enea Tallushi

Abstract:

In general, today, two types of launching bearings are used in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may have an influence on the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account any eccentricities which occur during incremental launching, and whether this depends on the respective launching bearing. Therefore, at the Technical University of Munich, large-scale buckling tests were carried out on longitudinally stiffened plates under biaxial stresses with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As the geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so that a GMNIA analysis is performed to determine the load capacity.
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the time of the converging load step. To evaluate the load introduction of the transverse load, the transverse stress concentration is plotted on a defined longitudinal section on the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to be able to make an assessment for the case, which is relevant in practice. The input and the output are automatized and depend on the given parameters. Thus we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code. This allows us to evaluate and compare more parameters faster. Input and output errors are also avoided. It is, therefore, possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
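As a rough illustration of how such a parametric study can be driven from Python (the abstract mentions APDL combined with a Python code), the sketch below expands a parameter grid into individual analysis cases; every parameter name and value here is invented for illustration and is not taken from the actual model:

```python
from itertools import product

# Hypothetical parameter grid for a buckling parametric study; the names and
# values are illustrative only, not those of the validated numerical model.
grid = {
    "load_intro_length_mm": [200, 400],
    "load_eccentricity_mm": [0, 10, 20],
    "stiffener_spacing_mm": [500, 750],
    "web_thickness_mm": [10, 12],
}

def parameter_cases(grid):
    """Expand the grid into one dict per analysis run (full factorial)."""
    keys = list(grid)
    for values in product(*grid.values()):
        yield dict(zip(keys, values))

cases = list(parameter_cases(grid))
# 2 * 3 * 2 * 2 = 24 parameter combinations to feed to the solver
```

Each resulting dict could then be written into an APDL input file template, which is the kind of input/output automation the abstract describes.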

Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling

Procedia PDF Downloads 106
265 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA

Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray

Abstract:

Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating sample comparisons to pooled historical population-based data assembly or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analytics using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification. Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analysis of samples spiked with iQRS™ dsDNA biomimetics was analysed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with the Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks.
BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (the iQRS™ dsDNA mimetic) in next generation sequencing-based analysis offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single sample copy number variation analytics and further decentralisation of diagnostics to single patient sample assessment.
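The concordance figures quoted above (sensitivity, specificity, positive and negative predictive value) all follow from a standard 2×2 confusion table; a minimal sketch with illustrative counts only, not the study's actual tallies:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard concordance metrics from a 2x2 confusion table.

    tp/fp/tn/fn are counts of true/false positives and negatives
    relative to the comparator method.
    """
    return {
        "sensitivity": tp / (tp + fn),   # fraction of real events detected
        "specificity": tn / (tn + fp),   # fraction of non-events excluded
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

For example, a 100% NPV, as reported, requires zero false negatives against the comparator.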

Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration

Procedia PDF Downloads 62
264 Short and Long Crack Growth Behavior in Ferrite Bainite Dual Phase Steels

Authors: Ashok Kumar, Shiv Brat Singh, Kalyan Kumar Ray

Abstract:

There is growing awareness of the need to design steels against fatigue damage. Ferrite-martensite dual-phase steels are known to exhibit favourable mechanical properties like good strength, ductility, toughness, continuous yielding, and a high work hardening rate. However, dual-phase steels containing bainite as the second phase are potential alternatives to ferrite-martensite steels for certain applications where good fatigue properties are required. Fatigue properties of dual-phase steels are popularly assessed by the nature of variation of the crack growth rate (da/dN) with the stress intensity factor range (∆K), and by the magnitude of the fatigue threshold (∆Kth) for long cracks. There is increased emphasis on understanding not only the long crack fatigue behavior but also the short crack growth behavior of ferrite-bainite dual-phase steels. The major objective of this report is to examine the influence of microstructure on the short and long crack growth behavior of a series of developed dual-phase steels with varying amounts of bainite. Three low carbon steels containing Nb, Cr, and Mo as microalloying elements were selected for making ferrite-bainite dual-phase microstructures by suitable heat treatments. The heat treatment consisted of austenitizing the steel at 1100°C for 20 min, cooling at different rates in air prior to soaking in a salt bath at 500°C for one hour, and finally quenching in water. Tensile tests were carried out on 25 mm gauge length specimens with 5 mm diameter at a nominal strain rate of 0.6x10⁻³ s⁻¹ at room temperature. Fatigue crack growth studies were made on a recently developed specimen configuration using a rotating bending machine. The crack growth was monitored by interrupting the test and observing the specimens under an optical microscope connected to an image analyzer. The estimated crack lengths (a) at varying numbers of cycles (N) in different fatigue experiments were analyzed to obtain log da/dN vs. log ∆K curves for determining ∆Kthsc.
The microstructural features of these steels have been characterized and their influence on the near threshold crack growth has been examined. This investigation, in brief, involves (i) the estimation of ∆Kthsc and (ii) the examination of the influence of microstructure on short and long crack fatigue threshold. The maximum fatigue threshold values obtained from short crack growth experiments on various specimens of dual-phase steels containing different amounts of bainite are found to increase with increasing bainite content in all the investigated steels. The variations of fatigue behavior of the selected steel samples have been explained with the consideration of varying amounts of the constituent phases and their interactions with the generated microstructures during cyclic loading. Quantitative estimation of the different types of fatigue crack paths indicates that the propensity of a crack to pass through the interfaces depends on the relative amount of the microstructural constituents. The fatigue crack path is found to be predominantly intra-granular except for the ones containing > 70% bainite in which it is predominantly inter-granular.
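In the stable growth regime, log da/dN vs. log ∆K curves of the kind analyzed here are commonly summarized by the Paris law, da/dN = C(∆K)^m. The sketch below fits its constants by a straight line in log-log space; the data are synthetic, not the measured curves from this study:

```python
import numpy as np

def fit_paris_law(delta_K, dadN):
    """Fit da/dN = C * (dK)^m by linear regression in log-log space.

    Returns (C, m): the Paris-law coefficient and exponent.
    Only valid for the stable (Paris) regime, above the threshold dKth.
    """
    m, logC = np.polyfit(np.log10(delta_K), np.log10(dadN), 1)
    return 10 ** logC, m
```

Deviation of measured points from this straight line at low ∆K is what marks the near-threshold regime the abstract focuses on.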

Keywords: bainite, dual phase steel, fatigue crack growth rate, long crack fatigue threshold, short crack fatigue threshold

Procedia PDF Downloads 201
263 Smallholders’ Agricultural Water Management Technology Adoption, Adoption Intensity and Their Determinants: The Case of Meda Welabu Woreda, Oromia, Ethiopia

Authors: Naod Mekonnen Anega

Abstract:

The objective of this paper was to empirically identify technology-tailored determinants of the adoption and adoption intensity (extent of use) of agricultural water management technologies in Meda Welabu Woreda, Oromia regional state, Ethiopia. Meda Welabu Woreda, one of the administrative Woredas of the Oromia regional state, was selected purposively as it is one of the Woredas in the region where small scale irrigation practices and the use of agricultural water management technologies can be found among smallholders. Using the existence of water management practices (use of water management technologies) and land use pattern as criteria, Genale Mekchira Kebele was selected for the study. A total of 200 smallholders were selected from the Kebele using the technique developed by Krejcie and Morgan. The study employed the Logit and Tobit models to estimate and identify the economic, social, geographical, household, institutional, psychological, and technological factors that determine the adoption and adoption intensity of water management technologies. The study revealed that while 55 of the sampled households are adopters of agricultural water management technology, the remaining 140 were non-adopters of the technologies. Among the adopters included in the sample, 97% are using river diversion technology (traditional) with a traditional canal, while the rest (7%) are using pond with treadle pump technology. The Logit estimation revealed that while the adoption of river diversion is positively and significantly affected by membership in local institutions, active labor force, income, access to credit, and land ownership, the adoption of treadle pump technology is positively and significantly affected by family size, education level, access to credit, extension contact, income, access to market, and slope.
The Logit estimation also revealed that whereas group action requirement, distance to farm, and size of active labor force negatively and significantly influenced the adoption of river diversion, age and perception negatively and significantly influenced the adoption decision for treadle pump technology. On the other hand, the Tobit estimation revealed that the adoption intensity (extent of use) of agricultural water management is positively and significantly affected by education, access to credit, extension contact, access to market, and income. This study revealed that technology-tailored studies on the adoption of agricultural water management technologies (AWMTs) should be considered to identify and scale up best agricultural water management practices. In fact, in countries like Ethiopia, where social, economic, cultural, environmental, and agro-ecological conditions differ even within the same Kebele, technology-tailored studies that fit the conditions of each Kebele would help to identify and scale up best practices in agricultural water management.
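As a toy illustration of the Logit model underlying the adoption analysis: the probability of adoption is modeled as a sigmoid of a linear combination of household covariates. The bare-bones gradient-ascent fit below is only a sketch on a single made-up covariate; the actual study used many covariates and standard econometric estimation (and a Tobit model for intensity):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps a linear index to an adoption probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logit(x, y, lr=0.1, steps=5000):
    """Maximum-likelihood logit fit by gradient ascent (toy-scale sketch).

    x: 1-D covariate (e.g. a hypothetical household attribute),
    y: 0/1 adoption indicator. Returns [intercept, slope].
    """
    X = np.column_stack([np.ones(len(x)), x])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        # gradient of the log-likelihood: X^T (y - p)
        beta += lr * X.T @ (y - sigmoid(X @ beta)) / len(y)
    return beta
```

A positive fitted slope corresponds to the "positively and significantly affected" covariates reported above.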

Keywords: water management technology, adoption, adoption intensity, smallholders, technology tailored approach

Procedia PDF Downloads 445
262 Fischer Tropsch Synthesis in Compressed Carbon Dioxide with Integrated Recycle

Authors: Kanchan Mondal, Adam Sims, Madhav Soti, Jitendra Gautam, David Carron

Abstract:

Fischer-Tropsch (FT) synthesis is a complex series of heterogeneous reactions between CO and H2 molecules (present in the syngas) on the surface of an active catalyst (Co, Fe, Ru, Ni, etc.) that produce gaseous, liquid, and waxy hydrocarbons. This product is composed of paraffins, olefins, and oxygenated compounds. The key challenge in applying the Fischer-Tropsch process to produce transportation fuels is to make the capital and production costs economically feasible relative to the comparative cost of existing petroleum resources. To meet this challenge, it is imperative to enhance the CO conversion while maximizing carbon selectivity towards the desired liquid hydrocarbon ranges (i.e., reduction in CH4 and CO2 selectivities) at high throughputs. At the same time, it is equally essential to increase the catalyst robustness and longevity without sacrificing catalyst activity. This paper focuses on process development to achieve the above. The paper describes the influence of operating parameters on Fischer-Tropsch synthesis (FTS) from coal-derived syngas in supercritical carbon dioxide (ScCO2). In addition, unreacted gas and solvent recycle was incorporated, and the effect of unreacted feed recycle was evaluated. It was expected that, with the recycle, the feed rate could be increased. The increase in conversion and liquid selectivity, accompanied by the production of a narrower carbon number distribution in the product, suggests that higher flow rates can and should be used when incorporating exit gas recycle. It was observed that this process was capable of enhancing the hydrocarbon selectivity (nearly 98% CO conversion), improving the carbon efficiency from 17% to 51% in a once-through process, further converting 16% of CO2 to liquid with integrated recycle of the product gas stream, and increasing the life of the catalyst.
Catalyst robustness enhancement has been attributed to the absorption of the heat of reaction by the compressed CO2, which reduced the formation of hotspots, and to the dissolution of waxes by the CO2 solvent, which reduced the blinding of active sites. In addition, recycling the product gas stream reduced the reactor footprint to one-fourth of the once-through size, and product fractionation utilizing the solvent effects of supercritical CO2 was realized. In addition to the negative CO2 selectivities, methane production was also inhibited and was limited to less than 1.5%. The effect of the process conditions on the life of the catalysts will also be presented. Fe-based catalysts are known to have a high proclivity for producing CO2 during FTS. The data on the product spectrum and selectivity on Co and Fe-Co based catalysts, as well as those obtained from commercial sources, will also be presented. The measurable decision criteria were the increase in CO conversion at an H2:CO ratio of 1:1 (as commonly found in coal gasification product streams) in the supercritical phase as compared to the gas phase reaction, the decrease in CO2 and CH4 selectivity, the overall liquid product distribution, and finally an increase in the life of the catalysts.

Keywords: carbon efficiency, Fischer Tropsch synthesis, low GHG, pressure tunable fractionation

Procedia PDF Downloads 235
261 Material Handling Equipment Selection Using Fuzzy AHP Approach

Authors: Priyanka Verma, Vijaya Dixit, Rishabh Bajpai

Abstract:

This research paper is aimed at selecting the appropriate material handling equipment among the given choices so that the automation level in material handling can be enhanced. This work is a practical case scenario of material handling systems in a consumer electronic appliances manufacturing organization. The choices of material handling equipment among which the decision has to be made are Automated Guided Vehicles (AGVs), Autonomous Mobile Robots (AMRs), Overhead Conveyors (OCs) and Battery Operated Trucks/Vehicles (BOTs). There is a need to attain a certain level of automation in order to reduce human interventions in the organization. This requirement of achieving a certain degree of automation can be attained by the material handling equipment mentioned above. The main motive for selecting the above equipment for study was solely based on the corporate financial strategy of investment and the return obtained through that investment within a stipulated time framework. Since low cost automation with respect to material handling devices has to be achieved, this equipment was selected. The investment to be made on each unit of this equipment is less than 20 lakh rupees (INR), and the recovery period is less than five years. Fuzzy analytic hierarchy process (FAHP) is applied here for selecting equipment, where the four choices are evaluated on the basis of four major criteria and 13 sub-criteria, and are prioritized on the basis of the weights obtained. The FAHP used here makes use of triangular fuzzy numbers (TFN). The inability of the traditional AHP to deal with the subjectiveness and impreciseness in the pair-wise comparison process has been improved upon in the FAHP. The range of values for general rating purposes for all decision making parameters is kept between 0 and 1 on the basis of expert opinions captured on the shop floor. These experts were familiar with the operating environment and shop floor activity control.
Instead of generating an exact value, the FAHP generates ranges of values to accommodate the uncertainty in the decision-making process. The four major criteria selected for the evaluation of the available material handling equipment choices are materials, technical capabilities, cost, and other features. The thirteen sub-criteria listed under these four major criteria are weighing capacity, load per hour, material compatibility, capital cost, operating cost and maintenance cost, speed, distance moved, space required, frequency of trips, control required, safety, and reliability issues. The key finding shows that among the four major criteria selected, cost emerged as the most important criterion and is one of the key decision making aspects on which material handling equipment selection is based. On further evaluating the available equipment choices for each sub-criterion, it is found that the AGV scores the highest weight in most of the sub-criteria. On carrying out the complete analysis, the research shows that the AGV is the best material handling equipment suiting all the decision criteria selected in the FAHP, and therefore it is beneficial for the organization to carry out automated material handling in the facility using AGVs.
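One common way to turn triangular fuzzy numbers into crisp priority weights is centroid defuzzification followed by normalization; the minimal sketch below is illustrative only (many FAHP studies instead use Chang's extent analysis, and the paper does not specify its defuzzification step):

```python
def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def fuzzy_weights(tfns):
    """Crisp, normalised weights from a list of triangular fuzzy numbers."""
    crisp = [defuzzify(t) for t in tfns]
    total = sum(crisp)
    return [c / total for c in crisp]
```

Applied to the fuzzy criterion scores, the normalized weights would rank the criteria, with the largest weight (cost, in this study) dominating the equipment ranking.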

Keywords: fuzzy analytic hierarchy process (FAHP), material handling equipment, subjectiveness, triangular fuzzy number (TFN)

Procedia PDF Downloads 432
260 Improving Online Learning Engagement through a Kid-Teach-Kid Approach for High School Students during the Pandemic

Authors: Alexander Huang

Abstract:

Online learning sessions have become an indispensable complement to in-classroom learning sessions in the past two years due to the emergence of Covid-19. Due to social distancing requirements, many courses and interaction-intensive sessions, ranging from music classes to debate camps, have moved online. However, online learning imposes a significant challenge for engaging students effectively during learning sessions. To resolve this problem, Project PWR, a non-profit organization formed by high school students, developed an online kid-teach-kid learning environment to boost students' learning interest and further improve students’ engagement during online learning. Fundamentally, the kid-teach-kid learning model creates an affinity space that forms learning groups, where like-minded peers can learn and teach their interests. The teacher role can also help a kid identify the instructional task and set the rules and procedures for the activities. The approach also structures initial discussions to reveal a range of ideas, similar experiences, thinking processes, and language use, and lowers the student-to-teacher ratio, which enriches the online learning experience in upcoming lessons. In such a manner, a kid can practice both the teacher role and the student role to accumulate experience in conveying ideas and questions over online sessions more efficiently and effectively. In this research work, we conducted two case studies involving a 3D-Design course and a Speech and Debate course taught by high-school kids. Through Project PWR, a kid first needs to design the course syllabus based on a provided template to become a student-teacher. Then, the Project PWR academic committee evaluates the syllabus and offers comments and suggestions for changes. Upon the approval of a syllabus, an experienced volunteer adult mentor is assigned to interview the student-teacher and monitor the lectures' progress.
Student-teachers construct a comprehensive final evaluation for their students, which they grade at the end of the course. Moreover, each course requires conducting midterm and final evaluations through a set of surveyed replies provided by students to assess the student-teacher’s performance. The uniqueness of Project PWR lies in its established kid-teach-kid affinity space. Our research results showed that Project PWR can create a closed-loop system where a student can help a teacher improve and vice versa, thus improving overall student engagement. As a result, Project PWR’s approach can train teachers and students to become better online learners and give them a solid understanding of what to prepare for and what to expect from future online classes. The kid-teach-kid learning model can significantly improve students' engagement in online courses through Project PWR, effectively supplementing the traditional teacher-centric model that the Covid-19 pandemic has impacted substantially. Project PWR enables kids to share their interests and bond with one another, making the online learning environment effective and promoting positive and effective personal online one-on-one interactions.

Keywords: kid-teach-kid, affinity space, online learning, engagement, student-teacher

Procedia PDF Downloads 139
259 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

A large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth’s orbit after each small satellite’s mission. After 4 years of development, the motorless, low energy consumption and low weight system has been created. During a series of tests, the system has shown highly reliable efficiency. The PW-Sat2 deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom designed folding stand that provides automation and repeatability of the sail unwinding tests, and placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail’s release system requires a minimal amount of power, based on a thermal knife that burns through the Dyneema wire holding the system before deployment. The sail is pushed out of the container to a safe distance (20 cm away) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which, during the release, unfold the sail surface. To avoid dynamic effects on the satellite’s structure, there is a rotational link between the sail and the satellite’s main body. To obtain complete knowledge about the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail’s deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018.
At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low pressure conditions at the Bremen Drop Tower, Germany. Results of those tests will provide comprehensive knowledge about deployment in the space environment to which the system will be exposed during its mission. Outcomes of the numerical model and the tests will be compared afterwards and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of 4 C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it can be enlarged when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space, debris

Procedia PDF Downloads 284
258 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability

Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki

Abstract:

Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through the Soft Robotics Toolkit, shared fabrication resources, and a design competition. Exposing students outside of university research groups to this rapidly growing field allows for development of the soft robotics industry in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. These inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator, and in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20 psi for 20 times the duration when compared to the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms.
After developing these actuators, they were applied, in collaboration with a ceramics teacher at our school, to a glove used to transfer nuanced hand motions used to throw pottery from an expert artist to a novice. We quantified the improvement in the users’ pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved to be robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.

Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach

Procedia PDF Downloads 128
257 Mapping and Measuring the Vulnerability Level of the Belawan District Community in Encountering the Rob Flood Disaster

Authors: Dessy Pinem, Rahmadian Sembiring, Adanil Bushra

Abstract:

Medan Belawan is one of the 21 subdistricts in Medan. Medan Belawan Sub-district is directly adjacent to the Malacca Strait in the north. Due to this direct border with the Malacca Strait, the problem in this sub-district, which has continued for many years, is rob (tidal) flooding. In 2015, rob floods inundated Sicanang urban village, Belawan I urban village, Belawan Bahagia urban village and Bagan Deli village. The extent of inundation in the rob flood that occurred in September 2015 reached 540,938 ha. A rob flood is a phenomenon where sea water overflows onto the mainland. Rob floods can also be interpreted as a puddle of water on coastal land that occurs at high tide, so this phenomenon inundates parts of the coastal plain or lower-lying places when the sea level is at high tide. Rob flooding is a daily disaster faced by the residents of the Medan Belawan district. Rob floods can happen every month and last for a week. The floods soak not only the residents' houses but also the main road to Belawan Port, reaching depths of 50 cm. To deal with the problems caused by the floods and to prepare coastal communities for the character of coastal areas, it is necessary to know the vulnerability of the people who are repeatedly the victims of rob floods. Are the people of Medan Belawan sub-district, especially in the flood-affected villages, able to cope with the consequences of the floods? To answer this question, it is necessary to assess the vulnerability of the Belawan District community in the face of the flood disaster. This research is descriptive, qualitative and quantitative. Data were collected by observation, interviews and questionnaires in 4 urban villages often affected by rob floods. The vulnerabilities measured are physical, economic, social, environmental, organizational and motivational vulnerabilities.
For vulnerability in the physical field, the data collected were the distance between buildings, floor area ratio, drainage, and building materials. For economic vulnerability, the data collected were income, employment, building ownership, and insurance ownership. For vulnerability in the social field, the data collected were education, number of family members, children, the elderly, gender, training for disasters, and waste disposal practices. For organizational vulnerability, the data collected were the existence of organizations that advocate for the victims, their policies, and the laws governing the handling of tidal flooding. Motivational vulnerability is assessed from the presence of an information center for questions and answers about rob floods, and from the existence of an evacuation plan or route to avoid the disaster or reduce the number of victims. The results of this study indicate that most people in Medan Belawan sub-district have a high level of vulnerability in the physical, economic, social, environmental, organizational and motivational fields. They have no access to economic empowerment, no insurance, no motivation to solve problems themselves and only hope in the government, do not have organizations that support and defend them, and have physical buildings that are easily destroyed by rob floods.
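A composite score over the six vulnerability fields measured in the study can be sketched as a weighted mean. The field names follow the abstract, but the 0-to-1 scoring scale and the equal default weights are illustrative assumptions, not the study's actual scoring scheme:

```python
# The six vulnerability fields measured in the study.
FIELDS = ["physical", "economic", "social", "environmental",
          "organizational", "motivational"]

def composite_vulnerability(scores, weights=None):
    """Weighted mean of per-field vulnerability scores (0 = low, 1 = high).

    scores: dict mapping each field to a score in [0, 1].
    weights: optional dict of field weights; defaults to equal weights.
    """
    weights = weights or {f: 1.0 for f in FIELDS}
    total_w = sum(weights[f] for f in FIELDS)
    return sum(scores[f] * weights[f] for f in FIELDS) / total_w
```

Mapping such a composite score per urban village would support the "mapping and measuring" aim in the title.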

Keywords: disaster, rob flood, Medan Belawan, vulnerability

Procedia PDF Downloads 122