Search results for: fuzzy goal programming
1052 Image Based Landing Solutions for Large Passenger Aircraft
Authors: Thierry Sammour Sawaya, Heikki Deschacht
Abstract:
In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to add significant safety value during this challenging phase. This is why Airbus and ScioTeq have decided to work together to explore image-based landing solutions as additional landing aids, further expanding the possibility of performing automatic approach and landing at runways where the current guidance systems are either not fitted or not optimal. Current automated landing systems often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. These radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they lead the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to develop, validate, verify and demonstrate a certifiable automatic landing system that guides an airplane during the approach and landing phases based on images captured by an onboard camera system, enabling automatic landing independent of radio signals and without precision landing instruments. Within this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for testing and integrating the developed equipment in a Large Passenger Aircraft representative environment.
This paper describes the system as well as the associated methods and tools developed for its validation and verification.
Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing
Procedia PDF Downloads 101
1051 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities that occur during incremental launching, and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. Both this and the mesh size are taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied, and a bilinear material curve is used, so that a GMNIA (geometrically and materially nonlinear analysis with imperfections) is performed to determine the load capacity.
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the converged load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment of the case that is relevant in practice. The input and output are automated and depend on the given parameters; thus, we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, allowing a practical evaluation of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
Keywords: buckling behavior, eccentric load introduction, incremental launching, large-scale buckling tests, multi-axial stress states, parametric numerical modelling
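The automated parameter sweep described above can be sketched in Python. The parameter names and values below are illustrative placeholders, not the study's actual grid, and the APDL driver that would consume each combination is omitted:

```python
from itertools import product

# Hypothetical parameter grid for the buckling parametric study
# (names and values are illustrative, not the authors' actual inputs).
parameters = {
    "load_introduction_length_mm": [500, 750, 1000],
    "load_eccentricity_mm": [0, 10, 20],
    "stiffener_spacing_mm": [400, 600],
    "web_thickness_mm": [10, 12],
}

def parameter_combinations(grid):
    """Yield one dict per combination of parameter values."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

runs = list(parameter_combinations(parameters))
print(len(runs))  # 3 * 3 * 2 * 2 = 36 combinations
```

Each generated dict would then be written into an APDL input file and the resulting load capacity collected for comparison.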
Procedia PDF Downloads 107
1050 Exploring Tree Growth Variables Influencing Carbon Sequestration in the Face of Climate Change
Authors: Funmilayo Sarah Eguakun, Peter Oluremi Adesoye
Abstract:
One of the major problems facing human society is that the global temperature is believed to be rising due to human activity releasing carbon(IV) oxide (CO2) into the atmosphere. Carbon(IV) oxide is the most important greenhouse gas influencing global warming and possible climate change. With climate change becoming alarming, reducing CO2 in the atmosphere has become a primary goal of international efforts. Forest lands are major sinks and could absorb large quantities of carbon if the trees are judiciously managed. The study aims at estimating the carbon sequestration capacity of Pinus caribaea (pine) and Tectona grandis (teak) under the prevailing environmental conditions and at exploring the tree growth variables that influence carbon sequestration capacity in Omo Forest Reserve, Ogun State, Nigeria. Improving forest management by manipulating the growth characteristics that influence carbon sequestration could be an adaptive strategy of forestry to climate change. Random sampling was used to select Temporary Sample Plots (TSPs) in the study area, within which a complete enumeration of growth variables was carried out. The data collected were subjected to descriptive and correlational analyses. The results showed that the average carbon stored by pine and teak is 994.4±188.3 kg and 1350.7±180.6 kg, respectively. The difference in carbon stored between the species is significant enough to make the choice of species relevant in a climate change adaptation strategy. Tree growth variables influence the capacity of a tree to sequester carbon: height, diameter, volume, wood density and age are positively correlated with carbon sequestration. These growth variables could be manipulated by the forest manager as an adaptive strategy for climate change, while plantations of high wood density species could be relevant as a management strategy to increase carbon storage.
Keywords: adaptation, carbon sequestration, climate change, growth variables, wood density
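The correlational analysis mentioned above amounts to computing, for each growth variable, its correlation with stored carbon. A minimal sketch of a Pearson correlation, with illustrative (not measured) numbers:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (not measured) data: tree height (m) vs carbon stored (kg)
height = [12.0, 15.5, 18.0, 21.2, 24.8]
carbon = [610.0, 820.0, 990.0, 1180.0, 1420.0]
print(round(pearson_r(height, carbon), 3))
```

A value near +1 for a variable such as height or wood density would correspond to the positive correlations reported in the abstract.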
Procedia PDF Downloads 380
1049 Automated System: Managing the Production and Distribution of Radiopharmaceuticals
Authors: Shayma Mohammed, Adel Trabelsi
Abstract:
Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual aspect (radioactive and medicinal) that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into even the most skilled jobs, improving a company's overall performance by allowing human workers to focus on more important tasks than document filling. This project aims to implement a comprehensive system that ensures rigorous management of radiopharmaceuticals through a platform linking the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System, in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). We attempt to build a web application targeting radiopharmacies; the platform is built atop the inherently compatible web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code and TypeScript). The operating principle of the platform is based on two parts: a Radiopharmaceutical Backoffice for the radiopharmacist, who is responsible for preparing radiopharmaceuticals and delivering them, and a Medical Backoffice for the doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport and Stock Management.
It allows 8 classes of users: Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP), and Technical and Production Staff. As a digital platform bringing together all the players involved in the use of radiopharmaceuticals and integrating the stages of preparation, production and distribution, web technologies in particular promise to offer all the benefits of automation while requiring no more than a web browser to act as the user client, which is a strength because the web stack is by nature multi-platform. The platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of actors and of patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, a tedious and error-prone task. It would minimize manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.
Keywords: automated system, management, radiopharmacy, technical papers
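A minimal sketch of the kind of traceability record such a platform might keep, linking a preparation to the roles that acted on it. The field, class and role-abbreviation names below are hypothetical (the application's actual PHP/Angular schema is not described at this level of detail):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical traceability record for one radiopharmaceutical batch;
# names are illustrative, not the platform's actual schema.
@dataclass
class PreparationRecord:
    batch_id: str
    radionuclide: str
    activity_mbq: float
    events: list = field(default_factory=list)

    def log(self, actor_role, action):
        """Append a timestamped entry to the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.events.append((stamp, actor_role, action))

rec = PreparationRecord("B-2024-001", "Tc-99m", 740.0)
rec.log("PM", "production started")
rec.log("QCM", "quality control passed")
rec.log("QP", "batch released")
print(len(rec.events))
```

An append-only audit trail of this kind is what replaces the manually filled paperwork the abstract describes.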
Procedia PDF Downloads 156
1048 Temperature Effect on Changing of Electrical Impedance and Permittivity of Ouargla (Algeria) Dunes Sand at Different Frequencies
Authors: Naamane Remita, Mohammed Laïd Mechri, Nouredine Zekri, Smaïl Chihi
Abstract:
The goal of this study is to estimate the real and imaginary components of both the electrical impedance and the permittivity (z', z'' and ε', ε'', respectively) of Ouargla dune sand at different temperatures and frequencies, with an alternating (AC) excitation of 1 volt, using impedance spectroscopy (IS). This method is simple and non-destructive, and its results can frequently be correlated with a number of physical properties, dielectric properties and the effects of composition on the electrical conductivity of solids. The experimental results revealed that the real part of the impedance is higher at higher temperatures in the lower frequency region and gradually decreases with increasing frequency. At high frequencies, all values of the real part of the impedance were positive. At low frequencies, the values of the imaginary part were positive at all temperatures except 1200 degrees, where they were negative. At medium frequencies, the reactance values were negative at 25, 400, 200 and 600 degrees, then became positive at the remaining temperatures. At high frequencies, of the order of MHz, the values of the imaginary part of the electrical impedance behaved in contrast to what was recorded at the medium frequencies. The results showed that the electrical permittivity decreases with increasing frequency: we recorded permittivity values of the order of 10^11 at low frequencies, 10^7 at medium frequencies, and 10^2 at high frequencies. The real part of the electrical permittivity took large values at temperatures of 200 and 600 degrees Celsius at the lowest frequency, while the smallest permittivity value was recorded at 400 degrees Celsius at the highest frequency.
The results also showed large values of the imaginary part of the electrical permittivity at the lowest frequency, which then decrease as the frequency increases (the higher the frequency, the lower the imaginary part of the permittivity). The character of the electrical impedance variation indicates an opportunity to characterize the polarization of Ouargla dune sand and to determine whether this material consumes or produces energy. It is also possible to identify a satisfactory equivalent electric circuit and whether its behavior is inductive or capacitive.
Keywords: electrical impedance, electrical permittivity, temperature, impedance spectroscopy, Ouargla dune sand
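In impedance spectroscopy, the measured impedance at each frequency is decomposed as Z = Z' + jZ'', with Z' the resistive part and Z'' the reactance. A minimal sketch of that decomposition from voltage/current amplitudes and phase, with illustrative values (not the paper's measurements):

```python
import cmath
import math

def impedance_components(v_amplitude, i_amplitude, phase_deg):
    """Return (Z', Z'') for sinusoidal steady state: Z = |V|/|I| * e^(j*phi)."""
    z = (v_amplitude / i_amplitude) * cmath.exp(1j * math.radians(phase_deg))
    return z.real, z.imag

# Illustrative values: 1 V excitation, 2 mA response, 30-degree phase shift.
zr, zi = impedance_components(1.0, 2e-3, 30.0)
print(round(zr, 1), round(zi, 1))  # 433.0 250.0
```

The sign of Z'' is what distinguishes capacitive from inductive behavior, which is the distinction the abstract draws at different frequency ranges.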
Procedia PDF Downloads 48
1047 Afghan Women’s Perceptions on Domestic Violence and Child Protection in Finland
Authors: Laleh Golamrej Eliasi
Abstract:
Finland is the second most violent country for women in the European Union (EU): 47% of women in Finland report having experienced domestic violence against women (DVAW), compared to an average of 33% in the EU. Although the statistics in Finland are transparent, to the author's best knowledge there are no statistics on DV by nationality in Finland. On the other hand, being a Muslim woman in a non-Muslim-majority country represents a position of double vulnerability to violence. There are 10,404 Afghan refugees in Finland, who are Muslim. Barriers such as unfamiliarity with support services, fear of the police, racism, language, economic and practical dependence, social isolation, and family commitments all lead to under-reporting of DVAW among migrants. Although witnessing and experiencing DV have devastating effects on women's and children's health and well-being, there is a lack of studies on DVAW among Afghan families in Finland. To fill this knowledge gap, Afghan women living in Finland were selected as the target group to assess their views on DVAW and child protection. This study is implemented within a socio-ecological framework to assess the impacts of individual characteristics, interpersonal relationships, community, and societal components on DVAW in Afghan families. Interviews with Afghan women and content analysis are used to elicit participants' views on DVAW, its risk factors, and approaches and methods to improve protection for women and children. The main purpose is to obtain information about participants' views on the subject. The findings can be used to improve culturally safe social work knowledge and practices with a bottom-up approach to reduce DV and increase child protection. This research can therefore have important effects on the sustainable development of services and support the welfare and inclusion of immigrant families.
The expected results will contribute to sustainable gender equality, in line with the fifth Sustainable Development Goal.
Keywords: domestic violence, immigrant women, immigrant child protection, social work
Procedia PDF Downloads 84
1046 The Effect of a Weed-Killer Sulfonylurea on Durum Wheat (Triticum durum Desf.)
Authors: L. Meksem Amara, M. Ferfar, N. Meksem, M. R. Djebar
Abstract:
Wheat is the most widely consumed cereal in the world. In Algeria, production of this cereal covers only 20 to 25% of the country's needs, the rest being imported. To improve the efficiency and productivity of durum wheat, farmers turn to pesticides: herbicides, fungicides and insecticides. However, this use often entails more or less significant losses of product, contaminating the environment and the entire food chain. Herbicides are substances developed to control or destroy plants considered undesirable. Whether natural or man-made (synthetic molecules), herbicides absorbed and metabolized by plants cause the death of those plants. In this work, we set out to evaluate the effect of a sulfonylurea herbicide, Cossack OD, at various concentrations (0, 2, 4 and 9 µg) on the Triticum durum variety Cirta. We evaluated plant growth by measuring leaf and root length compared with the control, determined the proline content, and analyzed the level of one of the antioxidative enzymes, catalase, after 14 days of treatment. Sulfonylureas are foliar and root herbicides that inhibit acetolactate synthase (ALS), a plant enzyme essential to development; this inhibition causes growth arrest and then death. The results obtained show a decrease in the average length of leaves and roots, which can be explained by the fact that ALS inhibitors are most active in the young, growing regions of the plant: inhibition of cell division entails a limitation of foliar and root growth. We also recorded a highly significant increase in proline levels and a stimulation of catalase activity.
In response to increasing herbicide concentrations, the marked increase in antioxidative mechanisms in the wheat cultivar Cirta suggests that its high sensitivity to this sulfonylurea herbicide is related to enhanced production of reactive oxygen species and the resulting oxidative damage.
Keywords: sulfonylurea, Triticum durum, oxidative stress, toxicity
Procedia PDF Downloads 413
1045 The Impact of Sign Language on Generating and Maintaining a Mental Image
Authors: Yi-Shiuan Chiu
Abstract:
Deaf signers have been found to have better mental imagery performance than hearing nonsigners. The goal of this study was to investigate the ability of deaf signers of Taiwanese Sign Language (TSL) to generate, maintain, and manipulate mental images. In the visual image task, participants first memorized digits formed within a 4 × 5 grid of cells. After a cue, a Chinese digit character shown above a blank grid, participants had to form the corresponding digit image. When shown a probe, a grid containing a red circle, participants had to decide as quickly as possible whether the probe would have been covered by the mental image of the digit. The ISI (interstimulus interval) between cue and probe was manipulated. In Experiment 1, 24 deaf signers and 24 hearing nonsigners performed image generation tasks (ISI: 200, 400 ms) and image maintenance tasks (ISI: 800, 2000 ms). The results showed that deaf signers had an enhanced ability to generate and maintain a mental image. To explore the process further, in Experiment 2, 30 deaf signers and 30 hearing nonsigners performed a visual search while maintaining a mental image. Between the digit image cue and the red circle probe, participants performed a visual search task, deciding whether a target triangle's apex pointed to the right or left. When there was only one triangle in the search display, deaf signers and hearing nonsigners showed similar visual search performance, in which search targets at the mental image locations were facilitated. However, deaf signers maintained the mental image better and faster than nonsigners. In Experiment 3, we increased the number of triangles to 4 to raise the difficulty of the visual search task. Deaf participants performed more accurately in both the visual search and image maintenance tasks.
The results suggest that people may use eye movements as a mnemonic strategy to maintain a mental image, and that deaf signers have an enhanced ability to resist the interference of eye movements when there are fewer distractors. In sum, these findings suggest that deaf signers have enhanced mental image processing.
Keywords: deaf signers, image maintenance, mental image, visual search
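The probe decision in this paradigm reduces to a set-membership test: is the probed cell part of the cells the imagined digit occupies? A minimal sketch, in which the 4 × 5 grid size follows the abstract but the cell pattern for the digit is an illustrative assumption:

```python
# A digit image occupies a set of (col, row) cells in the 4x5 grid;
# the probe is "covered" if the red circle's cell is in that set.
GRID_COLS, GRID_ROWS = 4, 5

# Illustrative cell pattern for the digit "7" (not the study's stimuli).
digit_seven = {(0, 0), (1, 0), (2, 0), (3, 0),  # top stroke
               (3, 1), (2, 2), (2, 3), (1, 4)}  # descending stroke
assert all(0 <= c < GRID_COLS and 0 <= r < GRID_ROWS for c, r in digit_seven)

def probe_covered(image_cells, probe_cell):
    """True if the probe cell falls on the imagined digit."""
    return probe_cell in image_cells

print(probe_covered(digit_seven, (2, 0)))  # on the top stroke -> True
print(probe_covered(digit_seven, (0, 4)))  # empty corner -> False
```

Response time and accuracy on exactly this yes/no decision, as a function of ISI, are the dependent measures described above.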
Procedia PDF Downloads 154
1044 Effect of Juvenile Hormone on Respiratory Metabolism during Non-Diapausing Sesamia cretica Wandering Larvae (Lepidoptera: Noctuidae)
Authors: E. A. Abdel-Hakim
Abstract:
The corn stemborer Sesamia cretica (Lederer) is viewed in many parts of the world as a major pest of cultivated maize, graminaceous crops and sugarcane. Its life cycle comprises two phases: a growth and development (non-diapause) phase and a diapause phase, which takes place in the last larval instar. The problems associated with conventional insecticides have strongly demonstrated the need for alternative safe compounds, prominent among which are the juvenoids, i.e. insect juvenile hormone (JH) mimics. The hormonal effect on metabolism has long been viewed as a secondary consequence of the hormone's direct action on specific energy-requiring biosynthetic mechanisms. The present study was therefore undertaken in a systematic fashion as a contribution towards clarifying the metabolic and energetic changes taking place in non-diapause wandering larvae as regulated by a JH mimic. For this purpose, we applied the JH mimic (Ro 11-0111) topically in 1 µl acetone at the onset of the non-diapause wandering larval (WL) stage, either as a single (standard) dose of 100 µg or as a single dose of 20 µg/g body weight. Energetic data were obtained by indirect calorimetry, converting volumetric respiratory gas exchange data, measured manometrically with a Warburg constant-volume respirometer, into caloric units (g-cal/g fw/h). In brief, the treated larvae underwent supernumerary larval moults, a potential the wandering larvae proved to possess whereby larval programming is restored, allowing S. cretica to overcome stresses even at this critical developmental period. The results, particularly with the high dose, show that 98% of the wandering larvae were rescued and survived up to one month (vs. 5 days for normal controls), finally forming larval-adult intermediates.
The solvent controls showed about 22% additional, but stationary, moultings. The basal respiratory metabolism (O2 uptake and CO2 output) of the WL, whether treated or untreated, followed reciprocal U-shaped curves throughout the developmental duration. The lowest points occurred near the day of prepupal formation: 571±187 µl O2/g fw/h and 553±181 µl CO2/g fw/h in untreated larvae, in contrast to 210±48 µl O2/g fw/h and 335±81 µl CO2/g fw/h in JH-treated larvae. Untreated (normal) larvae proved to utilize carbohydrates as the principal source of energy supply, these being fully oxidised without sparing any appreciable amount for endergonic conversion to fats. Between the juvenoid-treated larvae and their acetone-treated control equivalents there were no distinguishable differences; both utilized carbohydrates as the sole source of energy and endergonically converted almost similar percentages to fats. Overall, both treated and untreated WL utilized carbohydrates as the principal source of energy during this stage.
Keywords: juvenile hormone, respiratory metabolism, Sesamia cretica, wandering phase
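The indirect-calorimetry conversion can be sketched as follows. The abstract does not state its exact conversion factors, so the oxy-joule equivalent below (16 + 5.164·RQ joules per ml O2, a commonly used formula) is an assumption:

```python
def caloric_rate(vo2_ul, vco2_ul):
    """Convert respiratory gas exchange (ul/g fw/h) to g-cal/g fw/h.

    Uses an oxy-joule equivalent that varies with the respiratory
    quotient RQ = VCO2/VO2; the 16 + 5.164*RQ J/ml O2 factor is an
    assumed, commonly used formula, not the paper's stated method.
    1 cal = 4.184 J.
    """
    rq = vco2_ul / vo2_ul
    joules_per_ml_o2 = 16.0 + 5.164 * rq
    vo2_ml = vo2_ul / 1000.0
    return vo2_ml * joules_per_ml_o2 / 4.184  # g-cal/g fw/h

# Untreated prepupal low point reported in the abstract:
# 571 ul O2 and 553 ul CO2 per g fw per h.
print(round(caloric_rate(571.0, 553.0), 2))
```

An RQ near 1 in this calculation corresponds to the carbohydrate-dominated metabolism the abstract reports for the wandering larvae.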
Procedia PDF Downloads 294
1043 Intrinsic Motivational Factor of Students in Learning Mathematics and Science Based on Electroencephalogram Signals
Authors: Norzaliza Md. Nor, Sh-Hussain Salleh, Mahyar Hamedi, Hadrina Hussain, Wahab Abdul Rahman
Abstract:
Motivation is essentially the students' desire to engage in the learning process; however, it also depends on the goals underlying their involvement or non-involvement in academic activity. Even when students' motivation is at the same level, its basis may differ. This study focuses on the intrinsic motivational factor, by which students enjoy learning, or the feeling of accomplishment in an activity, for its own sake. Intrinsic motivation in learning mathematics and science has been found difficult to achieve because it depends on students' interest. In the Programme for International Student Assessment (PISA) for mathematics and science, Malaysia ranked third lowest. The main problem in the Malaysian educational system is that students tend to have extrinsic motivation: they have to score in exams in order to achieve good results and enrol as university students. The use of electroencephalogram (EEG) signals to identify students' intrinsic motivation in learning science and mathematics has been found to be scarce. In this research, we identify the correlation between a precursor emotion and its dynamic emotion to verify students' intrinsic motivation in learning mathematics and science. The 2-D Affective Space Model (ASM) was used to identify the relationship between a precursor emotion and its dynamic emotion based on the four basic emotions, happy, calm, fear and sad, which served as reference stimuli. An EEG device was used to capture the brain waves, while Mel Frequency Cepstral Coefficients (MFCC) were adopted to extract features before they were fed to a Multilayer Perceptron (MLP) to classify the valence and arousal axes of the ASM.
The results show that the precursor emotion had an influence on the dynamic emotions, and they indicate that most students have no interest in mathematics and science, according to the negative emotions (sad and fear) that appear in the EEG signals. We hope that these results can help us further relate students' behavior and intrinsic motivation to the learning of mathematics and science.
Keywords: EEG, MLP, MFCC, intrinsic motivational factor
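Once the MLP has produced valence and arousal values, the four reference emotions correspond to the quadrants of the 2-D Affective Space Model. A minimal sketch of that quadrant assignment, using the usual valence-arousal convention (assumed here rather than specified in the abstract):

```python
def asm_emotion(valence, arousal):
    """Map a point in the 2-D Affective Space Model to one of the four
    basic reference emotions. The quadrant assignment (happy: +v/+a,
    calm: +v/-a, fear: -v/+a, sad: -v/-a) is the common valence-arousal
    convention, assumed rather than taken from the paper.
    """
    if valence >= 0:
        return "happy" if arousal >= 0 else "calm"
    return "fear" if arousal >= 0 else "sad"

# Negative-valence outputs correspond to the "no interest" finding.
print(asm_emotion(-0.4, 0.7))   # fear
print(asm_emotion(-0.4, -0.7))  # sad
```

Under this mapping, the sad and fear classifications reported in the abstract are exactly the negative-valence half of the plane.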
Procedia PDF Downloads 367
1042 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation, recommending a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by an ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices against different factors as well. The objective of the presented study is therefore to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
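The model's input layer thus has 11 × 3 = 33 features. A minimal sketch of how such an input vector could be assembled from surveyed distress data; the distress names follow the abstract, but the feature ordering is an assumption:

```python
# 11 distress types x 3 severity levels = 33 input features, each a
# percentage of total pavement area. Ordering is an assumption.
DISTRESSES = ["alligator_cracking", "swelling", "rutting", "block_cracking",
              "long_transverse_cracking", "edge_cracking", "shoving",
              "raveling", "potholes", "patching", "lane_drop_off"]
SEVERITIES = ["low", "medium", "high"]

def build_input_vector(damage_pct):
    """Flatten a {(distress, severity): % area} dict into the ANN input
    order, defaulting missing combinations to 0."""
    return [damage_pct.get((d, s), 0.0) for d in DISTRESSES for s in SEVERITIES]

sample = {("rutting", "medium"): 3.5, ("potholes", "high"): 0.8}
x = build_input_vector(sample)
print(len(x))  # 33 features
```

Defaulting absent combinations to zero keeps the vector length fixed, which the ANN's input layer requires regardless of which distresses a given pavement section exhibits.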
Procedia PDF Downloads 137
1041 The Effects of Damping Devices on Displacements, Velocities and Accelerations of Structures
Authors: Radhwane Boudjelthia
Abstract:
The most recent earthquakes that occurred in the world, and particularly in Algeria, have killed thousands of people and caused severe damage. The example etched in our memory is the earthquake in the regions of Boumerdes and Algiers (the Boumerdes earthquake of May 21, 2003). For all the actors involved in the building process, an earthquake is the litmus test for construction. The goal we set ourselves is to contribute to the implementation of a thoughtful approach to the seismic protection of structures. For many engineers, the most conventional approach to protecting works (buildings and bridges) from the effects of earthquakes is to increase their rigidity. This approach is not always effective, especially in contexts that favor resonance and the amplification of seismic forces. The field of earthquake engineering has therefore made significant inroads, catalyzed among other things by the development of computational techniques and the use of powerful test facilities. This has led to the emergence of several innovative technologies, such as the introduction of special isolation devices between the infrastructure and the superstructure. This approach, commonly known as 'seismic isolation', absorbs significant forces without damage to the structure, thus ensuring the protection of lives and property. In addition, the forces transmitted to the construction by the ground shaking are concentrated mainly at the supports. With such devices, the natural period of the construction increases and the seismic loads are reduced; there is thus an attenuation of the seismic motion. Likewise, the base isolation mechanism may be combined with earthquake dampers in order to control the deformation of the isolation system and the absolute displacement of the superstructure above the isolation interface. Alternatively, earthquake dampers alone can be used to reduce oscillation amplitudes and thereby reduce seismic loads.
The use of damping devices represents an effective solution for the rehabilitation of existing structures. Given all these acceleration reducing means considered passive, much research has been conducted for several years to develop an active control system of the response of buildings to earthquakes.Keywords: earthquake, building, seismic forces, displacement, resonance, response
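The period shift that underlies base isolation can be illustrated with a short calculation. The sketch below (illustrative values only, not from the paper) computes the natural period T = 2π√(m/k) of an equivalent single-degree-of-freedom system for a fixed-base and a base-isolated configuration of the same mass; the softer isolation layer lengthens the period and thus reduces the seismic demand.

```python
import math

def natural_period(mass_kg: float, stiffness_n_per_m: float) -> float:
    """Natural period T = 2*pi*sqrt(m/k) of an equivalent SDOF system."""
    return 2 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

# Hypothetical fixed-base vs. base-isolated stiffness for the same superstructure.
mass = 5.0e5          # kg (assumed)
k_fixed = 2.0e8       # N/m, stiff fixed-base structure (assumed)
k_isolated = 5.0e6    # N/m, flexible isolation layer (assumed)

T_fixed = natural_period(mass, k_fixed)
T_isolated = natural_period(mass, k_isolated)

# Isolation lengthens the period, shifting the structure away from the
# high-acceleration plateau of a typical design spectrum.
assert T_isolated > T_fixed
print(f"T_fixed = {T_fixed:.2f} s, T_isolated = {T_isolated:.2f} s")
```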
Procedia PDF Downloads 127
1040 Displacement and Cultural Capital in East Harlem: Use of Community Space in Affordable Artist Housing
Authors: Jun Ha Whang
Abstract:
As New York City weathers a swelling 'affordability crisis' marked by rapid transformation in land development and urban culture, much of the associated scholarly debate has turned to questions of the underlying mechanisms of gentrification. Though classically approached from the point of view of urban planning, increasingly these questions have been addressed with an eye to understanding the role of cultural capital in neighborhood valuation. This paper will examine the construction of an artist-specific affordable housing development in the Spanish Harlem neighborhood of Manhattan in order to identify and discuss several cultural parameters of gentrification. This study’s goal is not to argue that the development in question, named Art space PS 109, straightforwardly increases or decreases the rate of gentrification in Spanish Harlem, but rather to study dynamics present in the construction of Art space PS 109 as a case study considered against the broader landscape of gentrification in New York, particularly with respect to the impact of artist communities on housing supply. In the end, what Art space PS 109 most valuably offers us is a reference point for a comparative analysis of affordable housing strategies currently being pursued within municipal government. Our study of Art space PS 109 has allowed us to examine a microcosm of the city’s response and evaluate its overall strategy accordingly. As a base line, the city must aggressively pursue an affordability strategy specifically suited to the needs of each of its neighborhoods. It must also conduct this in such a way so as not to undermine its own efforts by rendering them susceptible to the exploitative involvement of real estate developers seeking to identify successive waves of trendy neighborhoods. 
Though Art space PS 109 offers an invaluable resource for the city’s legitimate aim of preserving its artist communities, with such a high inclusion rate of artists from outside the community, the project risks additional displacement, strongly suggesting the need for further study of the implications of sites of cultural capital for neighborhood planning.
Keywords: artist housing, displacement, east Harlem, urban planning
Procedia PDF Downloads 163
1039 The Effect of Increased Tip Area of Suction Caissons on the Penetration Resistance Coefficients
Authors: Ghaem Zamani, Farveh Aghaye Nezhad, Amin Barari
Abstract:
The installation process of suction caissons is usually a challenging step in the design phase, especially in the case of suction-assisted installation. In engineering practice, the caisson penetration resistance is estimated from two components: the resistance governed by inner and outer skirt friction, and the tip resistance. Different methods have been proposed in the literature to evaluate these components, with the CPT-based methodology attaining notable popularity. In this method, two empirical coefficients, kf and kp, relate the frictional resistance and the tip resistance, respectively, to the cone penetration resistance (qc). A series of jacking installation and extraction experiments for different soil densities was carried out in the offshore geotechnical laboratory of Aalborg University, Denmark. The main goal of these tests was to find appropriate values of the empirical coefficients of the CPT-based method for buckets with a large embedment ratio (i.e., d/D = 1, where d is the skirt length and D is the diameter) and an increased tip area penetrated into dense sand deposits. The friction resistance was isolated during the pullout experiments; hence, kf was back-calculated from tests in the absence of tip resistance. The actuator force during jacking installation equals the sum of frictional resistance and tip resistance. Therefore, the tip resistance of the bucket was calculated by subtracting the back-calculated frictional resistance from the penetration resistance, from which the coefficient kp was obtained. Cone penetration tests were performed at different points before and after each installation attempt to measure the cone penetration resistance (qc), and the average value of qc was used for the calculations.
The experimental results of the jacking installation tests indicated that a larger friction area considerably increased the penetration resistance; however, this effect was completely diminished when suction-assisted penetration was used. Finally, the values measured for the empirical coefficients of the CPT-based method are compared with the highest expected and most probable values suggested by DNV (1992) for uniform-thickness buckets.
Keywords: suction caisson, offshore geotechnics, cone penetration test, wind turbine foundation
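The back-calculation procedure described above can be sketched as follows. All forces, areas, and the average cone resistance below are hypothetical placeholders, not measurements from the Aalborg tests; the point is only the arithmetic relating pullout and jacking forces to kf and kp.

```python
# Illustrative back-calculation of the CPT-based coefficients kf and kp,
# following the procedure described in the abstract. All numbers are
# hypothetical; the skirt and tip areas are assumed geometry.
def back_calculate_coefficients(pullout_force_kn, jacking_force_kn,
                                qc_avg_kpa, skirt_area_m2, tip_area_m2):
    # Pullout isolates skirt friction (no tip resistance), so:
    #   F_friction = kf * qc * As  ->  kf = F_friction / (qc * As)
    kf = pullout_force_kn / (qc_avg_kpa * skirt_area_m2)
    # Jacking force = friction + tip resistance, so:
    #   F_tip = F_jacking - F_friction = kp * qc * At
    kp = (jacking_force_kn - pullout_force_kn) / (qc_avg_kpa * tip_area_m2)
    return kf, kp

kf, kp = back_calculate_coefficients(
    pullout_force_kn=120.0,   # measured during extraction (hypothetical)
    jacking_force_kn=480.0,   # actuator force during installation (hypothetical)
    qc_avg_kpa=12_000.0,      # average cone resistance from nearby CPTs
    skirt_area_m2=6.3,        # inner + outer skirt contact area
    tip_area_m2=0.15,         # increased tip area
)
print(f"kf = {kf:.5f}, kp = {kp:.3f}")
```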
Procedia PDF Downloads 84
1038 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test analysis revealed abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, with a p-value of 0.001. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: ICA, RSN, refractory epilepsy, rsfMRI
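The group-ICA step requires specialist neuroimaging tooling, but the two-sample comparison of a per-subject network measure can be sketched in a few lines. The sketch below implements Welch's two-sample t statistic in pure Python over hypothetical per-subject fALFF values; the data and the component they belong to are invented for illustration.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and degrees of freedom, as used to
    compare a network measure (e.g. fALFF) between patients and controls."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-subject fALFF values for one spatially sorted component.
patients = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.39]
controls = [0.51, 0.48, 0.53, 0.50, 0.47]
t, df = welch_t(patients, controls)
print(f"t = {t:.2f}, df = {df:.1f}")
```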
Procedia PDF Downloads 76
1037 Outcomes of Pain Management for Patients in Srinagarind Hospital: Acute Pain Indicator
Authors: Chalermsri Sorasit, Siriporn Mongkhonthawornchai, Darawan Augsornwan, Sudthanom Kamollirt
Abstract:
Background: Although knowledge of pain and pain management is improving, it remains inadequate for patients. The Nursing Division of Srinagarind Hospital is responsible for the pain management system, including work instruction development and pain management indicators. We developed an information technology program for monitoring pain quality indicators, which was implemented in all nursing departments in April 2013. Objective: To study outcomes of acute pain management using process and outcome indicators. Method: This is a retrospective descriptive study. The sample population was patients who had acute pain 24-48 hours after receiving a procedure while admitted to Srinagarind Hospital in 2014. Data were collected from the information technology program. 2,709 patients with acute pain from 10 nursing departments were recruited into the study. The research tools were 1) a demographic questionnaire, 2) a pain management questionnaire for process indicators, and 3) a pain management questionnaire for outcome indicators. Data were analyzed and presented as percentages and means. Results: The process indicators show that nurses used a pain assessment tool and documented pain in 99.19% of cases. Pain reassessment after intervention was performed in 96.09% of cases. Opioids were given as pain medication to 80.15% of the patients, and the most frequently used non-pharmacological intervention was positioning (76.72%). For the outcome indicators, nearly half of the patients (49.90%) had moderate to severe pain; the mean worst-pain score was 6.48 and the mean overall pain score was 4.08. Patient satisfaction with pain management was good (49.17%) or very good (46.62%). Conclusion: Nurses used pain assessment tools and pain documentation, which met the goal of the pain management process. Patient satisfaction with pain management was high. However, patients still experienced moderate to severe pain.
Nurses should adhere more strictly to the guidelines of pain management, using acute pain guidelines especially when pain intensity is moderate to high. Nurses should also develop and practice a non-pharmacological pain management program to continually improve the quality of pain management. The information technology program should include more details about non-pharmacological pain techniques.
Keywords: outcome, pain management, acute pain, Srinagarind Hospital
Procedia PDF Downloads 232
1036 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach
Authors: Adel A. Ghobbar
Abstract:
The airline industry continuously faces the challenge of how to safely increase the service life of aircraft with limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components offering the finest customer service. The component owner and maintenance provider offer an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of the customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since EF units have a significant impact on customer satisfaction, their reliability needs to be increased at minimal cost, which is the goal of this paper. By characterizing the reliability of early failure (EF) units relative to No Fault Found (NFF) units, and in particular by performing a root cause analysis with an integrated cost analysis of EF units using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is derived. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components that need to be repaired, coming from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. Once the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in proposed failure solutions for EF. These lead to specific EF maintenance improvements, which can be set up to decrease the number of EF units and, as a result, increase reliability. The EFs investigated over a ten-year period accounted for a significant 32% of the 23,339 unscheduled failures, i.e., almost one-third of the entire population.
Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model
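The kind of prioritization such a cost model supports can be sketched as a ranking of failure modes by total cost impact. The failure modes, counts, and unit costs below are invented placeholders, not Fokker Services data; only the 32% of 23,339 figure comes from the abstract.

```python
# A minimal sketch of cost-driven prioritization: rank failure modes by
# (frequency x repair cost) to pick which EF improvements to pursue first.
# Figures are illustrative, not Fokker Services data.
failure_modes = [
    {"mode": "connector corrosion", "count": 120, "unit_cost": 4_200},
    {"mode": "solder joint fatigue", "count": 310, "unit_cost": 1_500},
    {"mode": "seal degradation",     "count": 75,  "unit_cost": 9_800},
]
for fm in failure_modes:
    fm["total_cost"] = fm["count"] * fm["unit_cost"]

ranked = sorted(failure_modes, key=lambda fm: fm["total_cost"], reverse=True)
for fm in ranked:
    print(f'{fm["mode"]:<22} {fm["total_cost"]:>10,}')

# EF share of unscheduled failures, as reported in the abstract:
ef_share = 0.32 * 23_339
print(f"~{ef_share:.0f} of 23,339 unscheduled failures were EF-related")
```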
Procedia PDF Downloads 127
1035 The Effect of Artificial Intelligence on Digital Factory
Authors: Sherif Fayez Lewis Ghaly
Abstract:
Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and an integral part of factory planning. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a methodology for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The aim is to keep the planning basis (the digital factory model) up to date for conversions within a factory. This requires the application of a methodology that reduces the deficits of existing approaches. The goal is to show how a digital factory model can be updated during ongoing factory operation. A methodology based on photogrammetry technology is presented, focusing on a simple and cost-effective way to track the numerous changes that occur in a factory building during operation.
The method is preceded by a hardware and software evaluation to identify the most cost-effective and fastest variant.
Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring
Procedia PDF Downloads 28
1034 Ecolodging as an Answer for Sustainable Development and Successful Resource Management: The Case of North West Coast in Alexandria
Authors: I. Elrouby
Abstract:
The continued growth of tourism in the future relies on maintaining a clean environment by achieving sustainable development. The erosion and degradation of beaches, the deterioration of coastal water quality, and the visual pollution of coastlines by massive developments have all contributed heavily to the loss of natural attractiveness for tourism. In light of this, promoting the concept of sustainable coastal development is becoming a central goal for governments and the private sector. An ecolodge is a small hotel or guesthouse that incorporates local architectural, cultural and natural characteristics, promotes environmental conservation by minimizing waste and energy use, and produces social and economic benefits for local communities. Egypt has seen some scattered attempts at ecolodging in areas such as Sinai. This research investigates the potential of the North West Coast (NWC) in Alexandria as a new candidate for ecolodging investments. The area is full of primitive natural and man-made resources. These, if used in an environmentally friendly way, could achieve cost reductions through successful resource management for investors on the one hand, and coastal preservation on the other. In-depth interviews were conducted with stakeholders in the tourism sector to examine their opinion of the potential of the research area for ecolodging developments. The candidates were also asked to rate the importance of the availability of certain environmental aspects in such establishments, such as the use of resources originating from local communities, the use of natural power sources, the use of environmentally friendly sewage disposal, forbidding the use of materials from endangered species, and enhancing cultural heritage conservation. The results show that the area is full of potential that could be effectively used for ecolodging investments.
If used efficiently, this could attract ecotourism as a supplementary type of tourism promoted in Alexandria alongside cultural, recreational and religious tourism.
Keywords: Alexandria, ecolodging, ecotourism, sustainability
Procedia PDF Downloads 200
1033 Assessment of the Risks of Environmental Factors on the Health of Kazakhstan Cities in Promoting the Sustainable Development Goals
Authors: Rassima Salimbayeva, Kaliash Stamkulova, Gulparshyn Satbayeva
Abstract:
In order to adapt projects to promote Sustainable Development Goal 11, «Ensuring openness, security, resilience and environmental sustainability of cities and human settlements», presented in the UN Concept, it is necessary to assess the environmental sustainability of cities. The analysis of the problems of sustainable urban development in Kazakhstan shows that the industrial past created a typical range of problems: transport, housing, environment, and, importantly, image. Currently, the issue of air pollution in cities whose economies are dominated by one industry or company should be studied in more detail at the project level. In this research, using ecological, economic, and social indicators of five single-industry towns of the Karaganda region of Kazakhstan, the risks of the negative impact of environmental factors on the health of the population were assessed, with special attention paid to air quality. In order to investigate the relationship between industrial structure, environmental pressure, and the environmental sustainability of resource-oriented cities, a principal component analysis was carried out to measure the industrial structure, environmental stress, and environmental sustainability of single-industry towns. It was established that in resource-based cities, economic growth mainly depends on the development of one main industry, which in turn depends primarily on local natural resources. Empirical results show that the regional structure of industry has a significant negative impact on the environmental sustainability of cities, in particular on the health of the population living in them.
The paper complements the study of the theory of urban sustainability and clarifies the relationship between industrial structure and environmental pressure on health safety and environmental sustainability of cities and towns, which is crucial for further promoting the "green" development of single-industry towns based on natural resources.
Keywords: public health risks, urban sustainability, suspended solids, single-industry towns, atmospheric air, environmental pollution
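The principal-component step can be sketched in pure Python. The indicator rows below (one per town: share of the dominant industry, an emissions index, a health-risk index) are invented for illustration, and the first component is extracted by power iteration on the covariance matrix; a statistics package would normally be used instead.

```python
# Extract the first principal component of town-level indicators by power
# iteration on the sample covariance matrix. Indicator values are made up.
def first_principal_component(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Rows: [dominant-industry share, PM emissions index, health-risk index].
towns = [
    [0.82, 74.0, 0.61],
    [0.55, 48.0, 0.39],
    [0.91, 88.0, 0.72],
    [0.40, 35.0, 0.28],
    [0.68, 60.0, 0.50],
]
pc1 = first_principal_component(towns)
print("first principal component:", [round(x, 3) for x in pc1])
```

With these unscaled indicators the emissions index dominates the variance, so the first component is dominated by that variable; in practice the indicators would be standardized first.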
Procedia PDF Downloads 14
1032 Reduplication in Urdu-Hindi Nonsensical Words: An OT Analysis
Authors: Riaz Ahmed Mangrio
Abstract:
Reduplication in Urdu-Hindi affects all major word categories, particles, and even nonsensical words. It conveys a variety of meanings, including distribution, emphasis, iteration, adjectival and adverbial. This study will primarily discuss reduplicative structures of nonsensical words in Urdu-Hindi and then briefly look at some examples from other Indo-Aryan languages to introduce the debate regarding the same structures in them. The goal of this study is to present counter-evidence against Keane (2005: 241), who claims “the base in the cases of lexical and phrasal echo reduplication is always independently meaningful”. However, Urdu-Hindi reduplication derives meaningful compounds from nonsensical words e.g. gũ mgũ (A) ‘silent and confused’ and d̪əb d̪əb-a (N) ‘one’s fear over others’. This needs a comprehensive examination to see whether and how the various structures form patterns of a base-reduplicant relationship or, rather, they are merely sub lexical items joining together to form a word pattern of any grammatical category in content words. Another interesting theoretical question arises within the Optimality framework: in an OT analysis, is it necessary to identify one of the two constituents as the base and the other as reduplicant? Or is it best to consider this a pattern, but then how does this fit in with an OT analysis? This may be an even more interesting theoretical question. Looking for the solution to such questions can serve to make an important contribution. In the case at hand, each of the two constituents is an independent nonsensical word, but their echo reduplication is nonetheless meaningful. This casts significant doubt upon Keane’s (2005: 241) observation of some examples from Hindi and Tamil reduplication that “the base in cases of lexical and phrasal echo reduplication is always independently meaningful”. The debate on the point becomes further interesting when the triplication of nonsensical words in Urdu-Hindi e.g. 
aẽ baẽ ʃaẽ (N) ‘useless talk’, which is equally important to discuss. This example challenges Harrison’s (1973) claim that only monosyllabic verbs in their progressive forms reduplicate twice to yield triplication, which is not the case in the example presented. The study will consist of a thorough descriptive analysis of the data for the purpose of documentation, followed by an OT analysis.
Keywords: reduplication, urdu-hindi, nonsensical, optimality theory
Procedia PDF Downloads 75
1031 Relocation of Livestock in Rural Canakkale Province Using Remote Sensing and GIS
Authors: Melis Inalpulat, Tugce Civelek, Unal Kizil, Levent Genc
Abstract:
Livestock production is one of the most important components of the rural economy. Due to urban expansion, rural areas close to expanding cities transform into urban districts over time. However, legislation places restrictions on livestock farming in such administrative units, since these operations tend to create environmental concerns, such as odor problems resulting from excessive manure production. Therefore, existing animal operations should be moved away from settlement areas. This paper focused on the determination of suitable lands for livestock production in Canakkale province of Turkey using remote sensing (RS) data and GIS techniques. To achieve this goal, Formosat 2 and Landsat 8 imagery, the Aster DEM, 1:25000-scale soil maps, village boundaries, and village livestock inventory records were used. The study was conducted using a suitability analysis, which evaluates the land in terms of limitations and potentials; the suitability range was categorized as Suitable (S) and Non-Suitable (NS). Limitations included the distances from main roads and crossroads, water resources, and settlements, while potentials were appropriate values for slope, land use capability, and land use/land cover status. Village-based S land distributions were presented and compared with livestock inventories. Results showed that approximately 44,230 ha are inappropriate because of the distance limitations (NS). Moreover, according to the LULC map, 71,052 ha consist of forests, olive and other orchards, and thus may not be suitable for such structures (NS). In comparison, a total of 1,228 ha of S lands were found within the study area. The village-based findings indicated that in some villages livestock production continues on NS areas.
Finally, it was suggested that organized livestock zones serving more than one village may be constructed, after detailed analyses that also consider political decisions, the opinions of local people, etc.
Keywords: GIS, livestock, LULC, remote sensing, suitable lands
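The limitation-and-potential logic of the suitability analysis can be sketched as a per-cell classifier. The distance buffers, slope threshold, and allowed land-cover classes below are invented placeholders, not the thresholds used in the study.

```python
# A simplified per-cell suitability classifier: a cell is Suitable (S) only
# if it passes every limitation (distance buffers) and every potential
# criterion (slope, land-use class). Thresholds are invented for illustration.
ALLOWED_LULC = {"pasture", "bare", "cropland"}   # assumed classes

def classify_cell(dist_road_m, dist_water_m, dist_settlement_m,
                  slope_pct, lulc):
    limitations_ok = (dist_road_m >= 100 and
                      dist_water_m >= 300 and
                      dist_settlement_m >= 500)
    potentials_ok = slope_pct <= 12 and lulc in ALLOWED_LULC
    return "S" if (limitations_ok and potentials_ok) else "NS"

cells = [
    (150, 400, 800, 6,  "pasture"),
    (50,  400, 800, 6,  "pasture"),   # too close to a road
    (150, 400, 800, 20, "pasture"),   # too steep
    (150, 400, 800, 6,  "forest"),    # excluded land cover
]
labels = [classify_cell(*c) for c in cells]
print(labels)  # ['S', 'NS', 'NS', 'NS']
```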
Procedia PDF Downloads 298
1030 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone
Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg
Abstract:
Achieving zero-energy targets in existing buildings is known to be a difficult task, requiring deep cuts in building energy consumption, which in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall energy consumption. Between the building’s consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art, typical mid- and high-rise fully air-conditioned office buildings, through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey ‘high-rise’ and a 7-storey ‘mid-rise’. Design iterations carried out on the energy models with advanced market-ready technologies in lighting, envelope, plug load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully air-conditioned buildings of up to about seven floors only by compromising on energy-intense facilities (such as full air conditioning, unlimited power supply, and standard user behaviour). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system’s energy savings goal.
Keywords: energy simulation, office building, tropical climate, zero energy buildings
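Why the ZEB limit scales with building height can be shown with a back-of-envelope balance: roof-limited PV generation is fixed by the footprint while consumption grows with each floor. The intensities below are assumed placeholder values, not results from the study's simulations.

```python
# Back-of-envelope net-zero balance: annual roof PV yield is fixed by the
# footprint, while annual consumption scales with the number of floors.
# All intensities are assumed placeholder values.
footprint_m2 = 1_000
pv_yield_kwh_per_m2_roof = 1_400   # annual PV yield per m2 of roof (assumed)
eui_kwh_per_m2_floor = 180         # optimized full-AC energy use intensity (assumed)

annual_generation = footprint_m2 * pv_yield_kwh_per_m2_roof

def net_balance(floors: int) -> float:
    consumption = floors * footprint_m2 * eui_kwh_per_m2_floor
    return annual_generation - consumption  # >= 0 means net zero is reachable

max_zeb_floors = max(n for n in range(1, 50) if net_balance(n) >= 0)
print(f"net zero reachable up to ~{max_zeb_floors} floors under these assumptions")
```

Under these particular assumptions the balance happens to flip just above seven floors, mirroring the order of magnitude the abstract reports.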
Procedia PDF Downloads 184
1029 Exploring Marine Bacteria in the Arabian Gulf Region for Antimicrobial Metabolites
Authors: Julie Connelly, Tanvi Toprani, Xin Xie, Dhinoth Kumar Bangarusamy, Kris C. Gunsalus
Abstract:
The overuse of antibiotics worldwide has contributed to the development of multi-drug resistant (MDR) pathogenic bacterial strains, and there is increasing urgency to discover antibiotics that combat MDR pathogens. The microbiome of the Arabian Gulf is a largely unexplored and potentially rich source of novel bioactive compounds. Microbes that inhabit the Abu Dhabi coastal regions are adapted to extreme environments with high salinity, hot temperatures, large temperature fluctuations, and acute exposure to solar energy. The microbes native to this region may therefore produce unique metabolites with therapeutic potential as antibiotics and antifungals. We have isolated 200 pure bacterial strains from mangrove sediments, cyanobacterial mats, and coral reefs of the Abu Dhabi region. In this project, we aim to screen these marine bacterial strains to identify antibiotics, in particular undocumented compounds that show activity against existing antibiotic-resistant strains. We have acquired the ESKAPE pathogen panel, which consists of six antibiotic-resistant gram-positive and gram-negative bacterial pathogens that collectively cause most clinical infections. Our initial primary screen, using a colony-picking co-culture assay, identified several candidate marine strains producing potential antibiotic compounds. We will next apply different assays, including disk-diffusion and broth turbidity growth assays, to confirm the results. This will be followed by bioactivity-guided purification and characterization of target compounds from scaled-up cultures of candidate strains, using SPE and HPLC fractionation, LC-MS, and NMR. For antimicrobial compounds with unknown structures, our final goal is to investigate their mode of action by identifying the molecular target.
Keywords: marine bacteria, natural products, drug discovery, ESKAPE panel
Procedia PDF Downloads 75
1028 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
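The ensemble step described in the abstract, averaging the outputs of the three top-performing models before thresholding, can be sketched as follows. The probabilities and the 0.5 decision threshold are illustrative stand-ins, not values taken from the study.

```python
import numpy as np

# Hypothetical class probabilities ("unhealthy air" = 1) from the three
# top-performing models for four days; illustrative numbers only.
p_logistic = np.array([0.9, 0.2, 0.6, 0.7])  # logistic regression
p_forest   = np.array([0.8, 0.3, 0.7, 0.9])  # random forest
p_neural   = np.array([0.7, 0.1, 0.5, 0.8])  # neural network

# Average the three models, then threshold at 0.5 to get the
# combined prediction used in place of any single model.
p_combined = np.mean([p_logistic, p_forest, p_neural], axis=0)
labels = (p_combined >= 0.5).astype(int)
print(labels.tolist())  # → [1, 0, 1, 1]
```

Averaging dampens any single model's bias: a day that one model misclassifies is usually outvoted by the other two.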
Procedia PDF Downloads 127
1027 Conviviality as a Principle in Natural and Social Realms
Authors: Xiao Wen Xu
Abstract:
Urban areas face the challenge of accommodating and integrating people at risk and those from various backgrounds. The success of interdependence as a tool for survival rests largely on mutually beneficial relationships among individuals within a given society. One approach to meeting this challenge is articulated by Ivan Illich in his book Tools for Conviviality, where he defines 'conviviality' as interactions that help individuals. With the goal of helping the community and applying conviviality as a principle to actors in both the natural and social realms of Moss Park in Toronto, the proposal redesigns the park and its buildings as a series of health care, extended learning, employment support, armoury, and recreation facilities that integrate the exterior landscape as treatment, teaching, military, and recreation areas; in other words, the proposal links services with access to park space. While buildings traditionally provide physical shelter, parks embody shelter and act as a service, as people often find comfort and relief in nature, and Moss Park, in particular, is home to many people at risk. This landscape is an important space not only for the homeless community but also for the rest of the neighbourhood. The thesis proposes that the federal government rebuild the current armoury, an obsolete building, while acknowledging the extensive future developments proposed by developers and their impact on public space. The neighbourhood is underserved, and the new design develops not just a new armoury but a complex of interrelated services fully integrated into the park. The redesigned armoury becomes an integral component of the community, serving both as a training facility for reservists and as an emergency shelter for the homeless community in sub-zero temperatures.
This paper proposes a new design for Moss Park through examining how 'park buildings', interconnected buildings and parks, can foster empowering relationships that create a supportive public realm.
Keywords: conviviality, natural, social, Ivan Illich
Procedia PDF Downloads 403
1026 Nanopriming Potential of Metal Nanoparticles against Internally Seed-Borne Pathogen Ustilago tritici
Authors: Anjali Sidhu, Anju Bala, Amit Kumar
Abstract:
Metal nanoparticles have the potential to revolutionize agriculture owing to their rapidly growing interdisciplinary nano-technological application domain. Numerous patents and products incorporating engineered nanoparticles (NPs) have entered agro-applications with the collective goal of promoting both efficiency and sustainability, with lower input and less waste than conventional products and approaches. Loose smut of wheat, caused by Ustilago segetum tritici, is an internally seed-borne disease. The pathogen lies dormant in the seed until germination, and its symptoms are expressed only at the reproductive stage of the plant. Various seed treatment agents are recommended for this disease, but because of the inappropriate seed treatment methods used by farmers, not every seed is treated, and infected seeds escape the fungicidal action. The antimicrobial potential and small size of nanoparticles make them the material of choice: they can enter each seed and restrict the pathogen inside it, owing to the greater number of nanoparticles per unit volume of the nanoformulations. Nanoparticles of diverse nature known for their in vitro antimicrobial activity, viz. ZnO, MgO, CuS, and AgNPs, were synthesized, surface-modified, and characterized by traditional methods. They were applied to infected wheat seeds, which were then grown under pot conditions, and the mycelium was tracked in the shoot and leaf regions of the seedlings by microscopic staining techniques. Mixed responses in the inhibition of this internal mycelium were observed. The time and method of application proved critical and were optimised in the present work. The results indicate that field trials are needed to carry these pot trials to the commercial level.
Successful field trials could be seen as a step toward replacing high-dose organic fungicides with high residue persistence.
Keywords: metal nanoparticles, nanopriming, seed-borne pathogen, Ustilago segetum tritici
Procedia PDF Downloads 144
1025 Advancing Inclusive Curriculum Development for Special Needs Education in Africa
Authors: Onosedeba Mary Ayayia
Abstract:
Inclusive education has emerged as a critical global imperative, aiming to provide equitable educational opportunities for all, regardless of their abilities or disabilities. In Africa, the pursuit of inclusive education faces significant challenges, particularly concerning the development and implementation of inclusive curricula tailored to the diverse needs of students with disabilities. This study delves into the heart of this issue, seeking to address the pressing problem of exclusion and marginalization of students with disabilities in mainstream educational systems across the continent. The problem is complex, entailing issues of limited access to tailored curricula, shortages of qualified teachers in special needs education, stigmatization, limited research and data, policy gaps, inadequate resources, and limited community awareness. These challenges perpetuate a system where students with disabilities are systematically excluded from quality education, limiting their future opportunities and societal contributions. This research proposes a comprehensive examination of the current state of inclusive curriculum development and implementation in Africa. Through an innovative and explicit exploration of the problem, the study aims to identify effective strategies, guidelines, and best practices that can inform the development of inclusive curricula. These curricula will be designed to address the diverse learning needs of students with disabilities, promote teacher capacity building, combat stigmatization, generate essential data, enhance policy coherence, allocate adequate resources, and raise community awareness. The goal of this research is to contribute to the advancement of inclusive education in Africa by fostering an educational environment where every student, regardless of ability or disability, has equitable access to quality education. 
Through this endeavor, the study aligns with the broader global pursuit of social inclusion and educational equity, emphasizing the importance of inclusive curricula as a foundational step towards a more inclusive and just society.
Keywords: inclusive education, special education, curriculum development, Africa
Procedia PDF Downloads 64
1024 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy
Authors: Marta Błażkiewicz, Adam Konefał
Abstract:
Currently, boron neutron capture therapy (BNCT) is carried out mainly with neutron beams generated in research nuclear reactors. This limits the possibility of performing BNCT in centers distant from such reactors. Moreover, the number of active nuclear reactors in operation worldwide is decreasing due to their limited operational lifetimes and the lack of new installations, so the possibilities of performing boron-neutron therapy with a neutron beam from an experimental reactor are shrinking. Nuclear power reactors cannot be used for BNCT because their infrastructure is not intended for radiotherapy. A serious challenge, therefore, is to find ways to perform boron-neutron therapy with neutrons generated outside a research nuclear reactor. This work meets that challenge: its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which would enable the therapy in medical centers unconnected to nuclear research reactors. Advances in neutron source fabrication make it possible to achieve strong neutron fluxes. The current stage of research focuses on developing virtual models of these sources using the Monte Carlo simulation method. In this study, the GEANT4 toolkit was used, including its High Precision Neutron model for simulating neutron-matter interactions. The models of the neutron sources were verified experimentally using the activation detector method with indium foil, together with the cadmium differentiation method, which separates the indium activation contributions of thermal and resonance neutrons.
Due to the large number of factors affecting the result of the verification experiment, a 10% discrepancy between the simulation and experimental results was accepted.
Keywords: BNCT, virtual models, neutron sources, Monte Carlo, GEANT4, neutron activation detectors, gamma spectroscopy
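A virtual Cf-252 source of the kind described above is usually built around the Watt fission spectrum. The standalone sketch below, independent of GEANT4, draws neutron energies from that spectrum by rejection sampling; the parameters a = 1.025 MeV and b = 2.926 MeV⁻¹ are the commonly quoted values for Cf-252 spontaneous fission and are an assumption here, not taken from the project.

```python
import math
import random

def watt(E, a=1.025, b=2.926):
    """Watt fission spectrum shape exp(-E/a)*sinh(sqrt(b*E));
    a (MeV) and b (1/MeV) are commonly quoted Cf-252 values."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

def sample_watt(n, e_max=20.0, seed=1):
    """Draw n neutron energies (MeV) by simple rejection sampling
    under a uniform envelope up to the numerically located peak."""
    rng = random.Random(seed)
    f_max = max(watt(0.001 + i * e_max / 10000) for i in range(10000))
    samples = []
    while len(samples) < n:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, f_max) < watt(e):
            samples.append(e)
    return samples

energies = sample_watt(20000)
mean_e = sum(energies) / len(energies)
print(f"mean neutron energy ≈ {mean_e:.2f} MeV")  # near the ~2.1-2.3 MeV Cf-252 average
```

In a full simulation these sampled energies would seed the primary-particle generator, with GEANT4's High Precision Neutron model handling the subsequent transport.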
Procedia PDF Downloads 186
1023 Nutrition Budgets in Uganda: Research to Inform Implementation
Authors: Alexis D'Agostino, Amanda Pomeroy
Abstract:
Background: Resource availability is essential to the effective implementation of national nutrition policies. To this end, the SPRING Project has collected and analyzed budget data from government ministries in Uganda, international donors, and other nutrition implementers, providing data for the first time on what funding is actually allocated to implement the nutrition activities named in the national nutrition plan. Methodology: USAID's SPRING Project used the Uganda Nutrition Action Plan (UNAP) as the starting point for budget analysis. Thorough desk reviews of public budgets from government, donors, and NGOs were mapped to activities named in the UNAP and validated by key informants (KIs) across the stakeholder groups. By relying on nationally recognized and locally created documents, SPRING provided a familiar basis for discussion, increasing the credibility and local ownership of the findings. Among other things, the KIs validated the amount, source, and type (specific or sensitive) of funding. When only high-level budget data were available, KIs provided rough estimates of the percentage of allocations that were actually nutrition-relevant, allowing the creation of confidence intervals around some funding estimates. Results: After validating the data and narrowing down the estimates of funding for nutrition-relevant programming, researchers applied a formula to estimate overall nutrition allocations. In line with guidance from the SUN Movement and its three-step process, nutrition-specific funding was counted at 100% of its allocation amount, while nutrition-sensitive funding was counted at 25%. The vast majority of nutrition funding in Uganda is off-budget, with over 90 percent provided outside the government system. Overall allocations are split nearly evenly between nutrition-specific and nutrition-sensitive activities. In FY 2013/14, the two-year study's baseline year, on- and off-budget funding for nutrition was estimated at around 60 million USD.
While the 60 million USD in allocations compares favorably to the 66 million USD estimate of the cost of the UNAP, not all activities are sufficiently funded; activities focused on behavior change were the most underfunded. In addition, accompanying qualitative research suggested that donor funding for nutrition activities may shift government funding into other areas of work, making it difficult to estimate the sustainability of current nutrition investments. Conclusions: Beyond providing figures, these estimates can be used together with the qualitative results of the study to explain how and why these amounts were allocated for particular activities and not others, examine the negotiation process that occurred, and suggest options for improving the flow of finances to UNAP activities for the remainder of the policy's tenure. By the end of the PBN study, several years of nutrition budget estimates will be available for comparing changes in funding over time. Halfway through SPRING's work, there is evidence that country stakeholders have begun to feel ownership of the ultimate findings, and some ministries are requesting increased technical assistance in nutrition budgeting. Ultimately, these data can be used within organizations to advocate for more and improved nutrition funding and to improve the targeting of nutrition allocations.
Keywords: budget, nutrition, financing, scale-up
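The SUN Movement weighting described above, counting nutrition-specific funding at 100% of its allocation and nutrition-sensitive funding at 25%, reduces to a one-line formula. The figures below are illustrative, not the study's actual budget lines.

```python
def nutrition_allocation(specific_usd, sensitive_usd,
                         specific_weight=1.0, sensitive_weight=0.25):
    """Estimate total nutrition-relevant funding: nutrition-specific
    spending counts fully, nutrition-sensitive spending at 25%."""
    return specific_weight * specific_usd + sensitive_weight * sensitive_usd

# Illustrative figures in millions of USD (not the study's line items):
total = nutrition_allocation(specific_usd=30.0, sensitive_usd=120.0)
print(total)  # → 60.0
```

Keeping the weights as parameters makes it easy to test the sensitivity of the overall estimate to the 25% convention.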
Procedia PDF Downloads 446