Search results for: displacement measurement
418 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation
Authors: A. Zapata, Orlando Arroyo, R. Bonett
Abstract:
Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm hypothesis. To study this effect, two case-study buildings were selected, a low-rise and a mid-rise building characteristic of TWRCB in Colombia. The buildings were analyzed in OpenSees using the MVLEM-3D element for the walls and shell elements for the slabs, so that the effect of the coupling diaphragm is included in the nonlinear behaviour. Three cases were considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to assess the probability of collapse at the design basis earthquake (DBE) and the maximum considered earthquake (MCE) levels, according to the site locations and hazard zones of the archetypes in the Colombian NSR-10 code.
Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of absent, rigid, and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the plan configuration of the structural walls; in addition, the nonlinear behaviour computed without the slab is more conservative than when the slab is represented. Including the flexible slab in the analysis underlines the importance of the slab's contribution to the distribution of shear forces among structural elements according to their design resistance and rigidity. The dynamic analyses revealed that including the slab reduces the collapse probability of this system, because displacements and deformations are lower, enhancing the safety of residents and the seismic performance. Modelling the slab is therefore important to capture the real effect of coupling on the distribution of shear forces among the walls, to estimate the correct nonlinear behaviour of this system, and to proportion the resistance and rigidity of the elements in design so as to reduce the possibility of damage during an earthquake.
Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm
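The two headline metrics of this abstract can be sketched in a few lines. This is an illustrative computation with made-up numbers, not the paper's data: the overstrength factor is the peak pushover base shear divided by the design base shear, and the collapse probability is the fraction of the 44 ground motions that produce collapse.

```python
def overstrength(base_shear_curve, design_base_shear):
    """Overstrength = peak base shear from the pushover curve / design base shear."""
    return max(base_shear_curve) / design_base_shear

def collapse_probability(collapse_flags):
    """Empirical collapse probability: fraction of records (1 = collapse) causing collapse."""
    return sum(collapse_flags) / len(collapse_flags)

# Hypothetical pushover base shear samples (kN) and design base shear
curve = [0, 800, 1500, 2100, 2300, 2250]
print(overstrength(curve, 2000))  # 1.15, within the 1.1-2 range reported

# Hypothetical collapse outcomes for 44 records scaled to the MCE level
flags = [1] * 6 + [0] * 38
print(collapse_probability(flags))
```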
Procedia PDF Downloads 744
417 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting
Authors: Daijun Chen
Abstract:
Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents during urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is widely recognized that constructing subsidized housing has improved the living conditions of its beneficiaries. However, affordable housing is mostly sited in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a "job-housing" spatial mismatch. The main reason for this problem is that residents of affordable housing are particularly sensitive to the spatial location of their residence, yet have relatively little choice or control over it, which leads to higher commuting costs; their real cost of living has therefore not been effectively reduced. In this study, 92 subsidized housing communities in Nanjing, China, are selected as the research sample. The residents of the affordable housing and their commuting spatio-temporal behavior characteristics are identified based on LBS (location-based service) data. Based on spatial mismatch theory, indicators such as commuting distance and commuting time are established to measure the degree of spatial mismatch of subsidized housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the factors influencing the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility, and supporting service facilities, using spatial, functional, and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery.
The factors affecting the spatial mismatch of affordable housing differ across spatial zones. The main drivers are the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low spatial mismatch is associated with a diversity of services and facilities. Based on this, a spatial optimization strategy is proposed for the different levels of spatial mismatch in subsidized housing, together with feasible suggestions for the future site selection of subsidized housing. The aim is to avoid or mitigate the impact of spatial mismatch, promote the "spatial adaptation" of jobs and housing, and genuinely improve the overall welfare of affordable housing residents.
Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits
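A commuting-based mismatch indicator of the kind described above can be sketched as follows. This is a hypothetical construction (the data and the exact index are invented, not the study's): the z-scored combination of mean commuting distance and mean commuting time per community, where a higher value indicates worse "job-housing" mismatch.

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize a list of values to zero mean and unit (population) std."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def mismatch_index(distances_km, times_min):
    """Average of z-scored commuting distance and time; higher = worse mismatch."""
    zd, zt = zscores(distances_km), zscores(times_min)
    return [(d + t) / 2 for d, t in zip(zd, zt)]

# Four hypothetical affordable-housing communities
dist = [5.2, 12.8, 18.4, 25.1]   # mean commuting distance (km)
time = [22, 41, 55, 70]          # mean commuting time (min)
idx = mismatch_index(dist, time)
print(idx)  # farther, slower-commuting communities score higher
```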
Procedia PDF Downloads 108
416 An Experimental Investigation of Chemical Enhanced Oil Recovery (CEOR) for Fractured Carbonate Reservoirs, Case Study: Kais Formation on Wakamuk Field
Authors: Jackson Andreas Theo Pola, Leksono Mucharam, Hari Oetomo, Budi Susanto, Wisnu Nugraha
Abstract:
About half of the world's oil reserves are located in carbonate reservoirs, where 65% of carbonate reservoirs are oil-wet and 12% intermediate-wet [1]. Oil recovery in oil-wet or mixed-wet carbonate reservoirs can be increased by dissolving surfactant in the injected water to change the rock wettability from oil-wet to more water-wet. The Wakamuk Field, operated by PetroChina International (Bermuda) Ltd. and PT. Pertamina EP in Papua, produces from the main reservoir of the Miocene Kais Limestone. First production commenced in August 2004, and the peak field production of 1,456 BOPD occurred in August 2010. The field was found to be a complex reservoir system, and by 2014 cumulative oil production was 2.07 MMBO, less than 9% of OOIP. This performance is indicative of the presence of secondary porosity in addition to matrix porosity, which has a low average porosity of 13% and a permeability of less than 7 mD. Implementing chemical EOR in this case is the best way to increase oil production. However, the selected chemical must be able to lower the interfacial tension (IFT), reduce oil viscosity, and alter the wettability; thus a special chemical treatment named SeMAR has been proposed. Numerous laboratory tests, such as phase behavior tests, core compatibility tests, mixture viscosity, contact angle measurement, IFT, imbibition tests, and core flooding, were conducted on Wakamuk field samples. Based on the spontaneous imbibition results for the Wakamuk field core, the SeMAR formulation S12A gave an oil recovery of 43.94% at 1 wt% concentration and a maximum oil recovery of 87.3% at 3 wt% concentration. In addition, the first core flooding scenario gave an oil recovery of 60.32% at 1 wt% S12A concentration, and the second scenario gave 96.78% oil recovery at 3 wt% concentration.
The soaking time of the chemicals has a significant effect on recovery, and higher chemical concentrations alter wettability over larger areas and therefore yield higher oil recovery. The chemical that gives the best overall laboratory results will also be considered for a huff-and-puff injection trial (pilot project) to increase oil recovery from the Wakamuk Field.
Keywords: Wakamuk field, chemical treatment, oil recovery, viscosity
Procedia PDF Downloads 693
415 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine
Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy
Abstract:
This paper presents the results of research to verify the simulated combustion process in the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed for it. Due to the high cost of experiments on a nine-cylinder, 1,000 hp aircraft engine, a simulation technique should be applied; therefore, using computational fluid dynamics (CFD) to simulate the combustion process is a reasonable solution. Accordingly, tests for various ignition advance angles were carried out, and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine under study has two spark plugs per cylinder, and the ignition advance angles had to be set separately for each spark plug. The simulation results were verified by comparing the in-cylinder pressure: the simulated courses of indicated pressure were compared with those of the engine mounted on a test stand. The real pressure course was measured with an optical sensor mounted in a specially drilled hole between the valves. This was the OPTRAND pressure sensor, designed especially for engine combustion research. The indicated pressure was measured in cylinder No. 3 while the engine was running at take-off power, loaded by a propeller on a special test bench. The verification of the CFD simulation results was based on the results of these test bench studies. The simulated pressure course obtained is within the measurement error of the optical sensor; this error is 1% and reflects the hysteresis and nonlinearity of the sensor. Comparing the real indicated pressure measured in the cylinder with the pressure taken from the simulation, it can be claimed that the pressure-based verification of the CFD simulations was a success. The next step was to investigate the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process.
Shifting the ignition timing between spark plugs 1 and 2 results in longer and more uneven combustion of the mixture. The optimum in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but separated ignition timings ensure that ignition will occur at all engine speeds and loads; this should be confirmed by a bench experiment on the engine. Nevertheless, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows the ignition point of the two spark plugs to be set so as to achieve as much power as possible.
Keywords: CFD model, combustion, engine, simulation
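The acceptance criterion used in the verification above can be sketched as a simple band check. The pressure samples below are invented for illustration, not measured data: the simulation is accepted if every simulated sample lies within the optical sensor's 1% measurement error of the corresponding measured sample.

```python
def within_error_band(measured, simulated, rel_error=0.01):
    """True if every simulated sample lies within rel_error of the measured one."""
    return all(abs(s - m) <= rel_error * abs(m)
               for m, s in zip(measured, simulated))

# Hypothetical indicated-pressure samples (bar) over a crank-angle window
measured  = [12.0, 25.0, 48.0, 61.0, 54.0, 33.0]
simulated = [12.1, 24.9, 48.3, 60.6, 54.2, 33.1]
print(within_error_band(measured, simulated))  # True: verification succeeds
```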
Procedia PDF Downloads 361
414 Dynamic High-Rise Moment Resisting Frame Dissipation Performances Adopting Glazed Curtain Walls with Superelastic Shape Memory Alloy Joints
Authors: Lorenzo Casagrande, Antonio Bonati, Ferdinando Auricchio, Antonio Occhiuzzi
Abstract:
This paper summarizes the results of a study on the dynamic dissipation provided by smart non-structural elements installed in modern high-rise mega-frame prototypes. An innovative glazed curtain wall was designed using Shape Memory Alloy (SMA) joints in order to increase energy dissipation and enhance the seismic and wind response of the structures. The studied buildings consisted of thirty- and sixty-storey planar frames, extracted from reference three-dimensional steel Moment Resisting Frames (MRF) with outriggers and belt trusses. The internal core was composed of a CBF system, whilst outriggers were placed every fifteen storeys to limit second-order effects and inter-storey drifts. These structural systems were designed in accordance with European rules, and numerical FE models were developed with an open-source code able to account for geometric and material nonlinearities. With regard to the characterization of non-structural building components, full-scale crescendo tests were performed on aluminium/glass curtain wall units at the laboratory of the Construction Technologies Institute (ITC) of the Italian National Research Council (CNR), deriving force-displacement curves. Three-dimensional brick-based inelastic FE models were calibrated against the experimental results to simulate the façade response. Since recent seismic events and extreme dynamic wind loads have caused widespread failure of non-structural components, which generates significant economic losses and represents a hazard to pedestrian safety, a more dissipative glazed curtain wall was studied. Taking advantage of the mechanical properties of SMA, advanced smart joints were designed with the aim of enhancing both the dynamic performance of the single non-structural unit and the global behavior.
Thus, three-dimensional brick-based plastic FE models of the innovative non-structural system were produced, simulating the evolution of mechanical degradation in aluminium-to-glass and SMA-to-glass connections under large deformations. Equivalent nonlinear links were then calibrated to reproduce the behavior of both the tested and the smart-designed units, and implemented in the thirty- and sixty-storey planar frame FE models. Nonlinear time history analyses (NLTHA) were performed to quantify the potential of the new system when considered part of the lateral force resisting system (LFRS) of modern high-rise MRFs. Sensitivity to structure height was explored by comparing the responses of the two prototypes. Trends in global and local performance are discussed to show that, if accurately designed, advanced materials in non-structural elements provide new sources of energy dissipation.
Keywords: advanced technologies, glazed curtain walls, non-structural elements, seismic-action reduction, shape memory alloy
Procedia PDF Downloads 329
413 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Authors: Rutej R. Mehta, Michael A. Chappell
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL, the white paper (Alsop et al., MRM, 73.1 (2015): 102-116), does not account for dispersion, which introduces errors into the CBF estimates. Given that the transport time from the labelling region to the tissue, the arterial transit time (ATT), depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed against the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this relationship is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s even when dispersion effects were ignored.
Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ = 0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25 s, and that this window narrows as dispersion occurs. Provided that the dispersion level falls below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. Substantial errors were observed with other common dispersion models at dispersion levels similar to those reported in the literature.
Keywords: arterial spin labelling, dispersion, MRI, perfusion
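The basic mechanism quantified above can be illustrated numerically. This is a simplified sketch, not the paper's kinetic model: a rectangular labelled bolus is convolved with a Gaussian dispersion kernel of width sigma, and the resulting peak attenuation serves as a crude proxy for the error that a quantification formula assuming an undispersed bolus would incur. The time step, bolus duration, and sigma value are illustrative choices (sigma = 0.6 matches the threshold reported above).

```python
import math

def gaussian_kernel(sigma, dt, half_width=4.0):
    """Discrete Gaussian kernel, truncated at +/- half_width * sigma, area-normalised."""
    n = int(half_width * sigma / dt)
    k = [math.exp(-0.5 * (i * dt / sigma) ** 2) for i in range(-n, n + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalisation preserves the bolus area

def disperse(bolus, sigma, dt):
    """Convolve the bolus with the Gaussian kernel (zero-padded at both ends)."""
    k = gaussian_kernel(sigma, dt)
    n = len(k) // 2
    padded = [0.0] * n + list(bolus) + [0.0] * n
    return [sum(padded[i + j] * k[j] for j in range(len(k)))
            for i in range(len(bolus))]

dt = 0.05  # time step (s)
# Rectangular labelled bolus: amplitude 1 from 0.5 s to 2.3 s (1.8 s label)
bolus = [1.0 if 0.5 <= i * dt < 2.3 else 0.0 for i in range(100)]
dispersed = disperse(bolus, sigma=0.6, dt=dt)
peak_error = (max(bolus) - max(dispersed)) / max(bolus)
print(f"peak attenuation: {peak_error:.1%}")
```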
Procedia PDF Downloads 371
412 Performance Management in Public Administration in Chile and Portugal
Authors: Lilian Bambirra De Assis, Patricia Albuquerque Gomes, Kamila Pagel De Oliveira, Deborah Oliveira Santos, Marcelo Esteves Chaves Campos
Abstract:
This paper analyzes how performance management occurs in the context of the modernization of the federal public sector in Chile and Portugal. To do so, the study was based on a theoretical framework covering the modernization of public administration through to performance management, including people management. The work consisted of qualitative-descriptive research in which 16 semi-structured interviews were conducted in the countries under study, and documents and legislation on the subject were analyzed. Performance management, like other people management subsystems, is criticized for using private sector management tools based on a results-driven logic; from this point of view, certain private sector practices for measuring performance cannot simply be transplanted into public administration. Beyond this criticism, performance management can contribute to the achievement of countries' strategic objectives, and interest in it is growing, a trend that can be verified through the manuals produced, the interest of consultants and professional organizations, both public and private, and OECD (Organisation for Economic Co-operation and Development) evaluations. In Portugal, public administration reform was implemented during the Constitutional Government (2005-2009) and aimed at restructuring human resources management, with an emphasis on its integration with budget management, in line with OECD guidance, while in Chile HRM (Human Resource Management) practices are devolved to ministries to a lesser extent than the OECD average. The central human resources management body mostly coordinates policy but is also responsible for other issues, including payment and classification systems.
Chile makes less use of strategic HRM practices than the average OECD country, and it stands out for the decentralization of its public bodies, which may grant autonomy but fragments the implementation of policies and practices, since they are not adopted by all agencies. The analysis shows that Chile and Portugal have personnel management practices and policies that refer to performance management, similar to other OECD countries. Both countries also face limitations in implementing performance management, and the results indicate that processes such as performance appraisal and compensation still need to be perfected.
Keywords: management of people in the public sector, modernization of public administration, performance management in the public sector, HRM, OECD
Procedia PDF Downloads 152
411 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories: it determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced in a fed-batch vacuum evaporative crystallizer. Crystallization quality is assessed from the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders feedback control and, ultimately, optimisation of the crystallization process. An attractive alternative is to use a soft sensor (a model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models of the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model that estimates the sugar crystal size as a function of input variables that are easy to measure online, with the potential to provide real-time crystal size estimates for effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S₀), and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model, since the initial crystal size (L₀) turned out not to play a significant role. The goodness of the resulting regression model was then evaluated.
The coefficient of determination R² was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (close to 1.0) and the reasonably low MARE indicate that the model predicts sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during crystallization in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
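The two goodness-of-fit measures reported above are straightforward to compute. The sketch below uses synthetic crystal sizes, not the paper's 128-run dataset, and shows the standard definitions of the coefficient of determination R² and the maximum absolute relative error (MARE).

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mare(y_true, y_pred):
    """Maximum absolute relative error, as a fraction."""
    return max(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred))

# Hypothetical crystal sizes (mm): measured vs predicted by the regression
y_true = [0.50, 0.62, 0.71, 0.80, 0.95]
y_pred = [0.51, 0.60, 0.72, 0.82, 0.93]
print(f"R^2 = {r_squared(y_true, y_pred):.3f}, MARE = {mare(y_true, y_pred):.1%}")
```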
Procedia PDF Downloads 208
410 Flood Hazards, Vulnerability and Adaptations in Upper Imo River Basin of South Eastern Nigeria
Authors: Christian N. Chibo
Abstract:
The Imo River Basin is located in South Eastern Nigeria and comprises 11 states: Imo, Abia, Anambra, Ebonyi, Enugu, Edo, Rivers, Cross River, Akwa Ibom, Bayelsa, and Delta. The basin has a fluvial erosional system dominated by powerful rivers descending from the steep slopes of the area. This research investigated the various hazards associated with flooding, the vulnerable areas, the elements at risk of flood, and the adaptation strategies adopted by local inhabitants to cope with the hazards. The aim is to identify, examine, and assess flood hazards, vulnerability, and adaptations in the Upper Imo River Basin. The study identified the role of elevation in the occurrence of floods and the elements at risk of flood, and examined the effectiveness or otherwise of the adaptation strategies for coping with the hazards. The data for this research comprise primary and secondary data, generated by field measurement, questionnaires, and library and website sources, among others. Other data were derived from topographical, geological, and digital elevation model (DEM) maps, while hydro-meteorological data were sourced from the Nigerian Meteorological Agency (NIMET) and the meteorological stations of the Geography and Environmental Management Departments of Imo State University and Alvan Ikoku Federal College of Education. 800 copies of a questionnaire were distributed, using systematic sampling, across the 8 locations used for the pilot survey; about 96% of the questionnaires were retrieved and used for the study. 13 flood events were identified in the study area; their causes, years, and dates are documented in the text, and the damage they caused was evaluated. The study established that for each flood event, over 200 mm of rain was observed on the day of the flood and the day before the flood. The study also observed that areas situated at higher elevations (see the DEM) are less prone to flood hazards, while areas at low elevations are more prone.
The elements identified to be at risk of flood are agricultural land, residential dwellings, retail trading and related services, public buildings, and community services. The study therefore recommends, among other measures, avoiding settlement on flood plains and flood-prone areas, and rearranging land use activities in the Upper Imo River Basin.
Keywords: flood hazard, flood plain, geomorphology, Imo River Basin
Procedia PDF Downloads 303
409 Exploration of Classic Models of Precipitation in Iran: A Case Study of Sistan and Baluchestan Province
Authors: Mohammad Borhani, Ahmad Jamshidzaei, Mehdi Koohsari
Abstract:
The study of climate has captivated human interest throughout history; people have long organized their daily activities in line with prevailing climatic conditions and seasonal variations. Understanding the climatic elements and parameters of each region, such as precipitation, which directly impacts human life, is essential, because in recent years there has been a significant increase in heavy rainfall in various parts of the world, attributed to climate change. Climate prediction models suggest a future characterized by more severe precipitation events and related floods on a global scale, as human-induced greenhouse gas emissions alter natural precipitation patterns. The Intergovernmental Panel on Climate Change reported global warming in 2001: the average global temperature has shown an increasing trend since 1861, with an increase of (0.6 ± 0.2) °C over the 20th century. The present study examines the monthly, seasonal, and annual precipitation trends in Sistan and Baluchestan province. It employs daily precipitation records from 13 precipitation measurement stations managed by the Iran Water Resources Management Company, spanning the period from 1997 to 2016. The results indicate that total monthly precipitation at the studied stations follows a sinusoidal pattern: the heaviest precipitation was observed in January, February, and March, and the lowest in September, October, and November. The seasonal analysis showed that precipitation rises in autumn, peaks in winter, and then decreases through spring and summer.
The examination of average precipitation indicated that the highest annual precipitation occurred in 1997 and then in 2004, while the lowest annual precipitation occurred between 1999 and 2001. The analysis of the annual precipitation trend demonstrates a decrease in precipitation in Sistan and Baluchestan province from 1997 to 2016.
Keywords: climate change, extreme precipitation, greenhouse gas, trend analysis
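The kind of annual trend check described above can be sketched with an ordinary least-squares slope. The annual totals below are invented for illustration, not the study's station data: a negative slope over the 1997-2016 window corresponds to the decreasing trend reported.

```python
def ols_slope(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = list(range(1997, 2017))
# Hypothetical annual precipitation totals (mm), loosely declining with noise
precip = [180, 150, 95, 90, 85, 120, 110, 160, 105, 100,
          115, 95, 90, 100, 85, 80, 95, 70, 75, 65]
slope = ols_slope(years, precip)
print(f"trend: {slope:.2f} mm/year")  # negative slope -> decreasing precipitation
```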
Procedia PDF Downloads 67
408 A Long Range Wide Area Network-Based Smart Pest Monitoring System
Authors: Yun-Chung Yu, Yan-Wen Wang, Min-Sheng Liao, Joe-Air Jiang, Yuen-Chung Lee
Abstract:
This paper proposes using a Long Range Wide Area Network (LoRaWAN) in a smart pest monitoring system targeting the oriental fruit fly (Bactrocera dorsalis), with the aim of improving the communication efficiency of the system. The oriental fruit fly is one of the main pests in Southeast Asia and the Pacific Rim. Various smart pest monitoring systems based on the Internet of Things (IoT) architecture have been developed to replace manual measurement. These systems often use Octopus II, a communication module following the 2.4 GHz IEEE 802.15.4 ZigBee specification, for their sensor nodes. Octopus II is commonly used for low-power, short-distance communication; however, its energy consumption increases as the logical topology becomes more complicated in order to provide sufficient coverage over a large area. By comparison, LoRaWAN follows the Low Power Wide Area Network (LPWAN) specification, which targets key requirements of IoT technology such as secure bi-directional communication, mobility, and localization services. A LoRaWAN network offers long-range communication, high stability, and low energy consumption, and the 433 MHz LoRaWAN band has two advantages over the 2.4 GHz ZigBee band: greater diffraction and less interference. In this paper, the Octopus II module is replaced by a LoRa module to increase the coverage of the monitoring system, improve communication performance, and prolong the network lifetime. The performance of the LoRa-based system is compared with a ZigBee-based system using three indexes: packet receiving rate, delay time, and energy consumption, with experiments conducted in different settings (e.g., distances and environmental conditions). In the distance experiment, a pest monitoring system using each of the two communication specifications is deployed in an area with various obstacles, such as buildings and living creatures, and the performance of the two specifications is examined.
The experimental results show that the packet receiving rate of the LoRa-based system is 96%, much higher than that of the ZigBee-based system when the distance between any two modules is about 500 m. These results demonstrate the capability of a LoRaWAN-based monitoring system for long-range transmission and confirm the stability of the system.
Keywords: LoRaWAN, oriental fruit fly, IoT, Octopus II
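The frequency advantage claimed above can be illustrated with the textbook free-space path loss formula. This is an idealized comparison (free-space conditions only, not the obstacle-rich test environment of the experiment): at the same 500 m range, the 433 MHz band suffers roughly 15 dB less path loss than the 2.4 GHz band, which is one reason the lower frequency carries further.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_lora = fspl_db(0.5, 433)      # 433 MHz at 500 m
loss_zigbee = fspl_db(0.5, 2400)   # 2.4 GHz at 500 m
print(f"LoRa 433 MHz:   {loss_lora:.1f} dB")
print(f"ZigBee 2.4 GHz: {loss_zigbee:.1f} dB")
print(f"advantage: {loss_zigbee - loss_lora:.1f} dB")  # ~14.9 dB in favour of 433 MHz
```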
Procedia PDF Downloads 352
407 Optimizing PharmD Education: Quantifying Curriculum Complexity to Address Student Burnout and Cognitive Overload
Authors: Frank Fan
Abstract:
PharmD (Doctor of Pharmacy) education has confronted an increasing challenge: curricular overload, a phenomenon resulting from the expansion of curricular requirements as PharmD education strives to produce practice-ready graduates. The aftermath of the global pandemic has amplified the need for healthcare professionals, leading to a growing trend of assigning more responsibilities to them to address the global healthcare shortage. For instance, the pharmacist's role has expanded beyond compounding and dispensing medication to include clinical services such as minor ailment management, patient counselling, and vaccination. Consequently, PharmD programs have responded by continually expanding their curricula and adding more requirements. While these changes aim to enhance the education and training of future professionals, they have also led to unintended consequences, including curricular overload, student burnout, and a potential decrease in program quality. To address this issue and ensure program quality, there is a growing need for evidence-based curriculum reform. My research seeks to integrate Cognitive Load Theory, machine learning algorithms within artificial intelligence (AI), and statistical approaches to develop a quantitative framework for optimizing curriculum design in the PharmD program at the University of Toronto, the largest PharmD program in Canada, providing quantification and measurement of issues that are currently discussed only anecdotally rather than with data. This research will serve as a guide for curriculum planners, administrators, and educators, aiding the understanding of how the pharmacy degree program compares to others within and beyond the field of pharmacy, and shedding light on opportunities to reduce the curricular load while maintaining its quality and rigor.
Given that pharmacists constitute the third-largest healthcare workforce, their education shares similarities and challenges with other health education programs. Therefore, my evidence-based, data-driven curriculum analysis framework holds significant potential for training programs in other healthcare professions, including medicine, nursing, and physiotherapy.
Keywords: curriculum, curriculum analysis, health professions education, reflective writing, machine learning
Procedia PDF Downloads 61
406 Emigration Improves Life Standard of Families Left Behind: An Evidence from Rural Area of Gujrat-Pakistan
Authors: Shoaib Rasool
Abstract:
Migration trends in the rural areas of Gujrat are increasing day by day among illiterate people, who consider migration a source of attraction and the charm of a destination. It affects the life standard of the families left behind in both positive and negative ways in the context of poverty, socio-economic status, and living standards. It also improves material possessions as well as social indicators of living: housing conditions, children’s schooling, health-seeking behaviour, and, to some extent, the family environment. The present study analyzes the socio-economic conditions and life standard of emigrant families left behind in the rural areas of Gujrat district, Pakistan. A survey design was used on 150 families selected from rural areas of the Gujrat district through a purposive sampling technique. A well-structured questionnaire was administered by the researcher to explore the nature of the study and to carry out the data collection process. The measurement tool was pretested on 20 families to check its workability and reliability before the actual data collection. Statistical tests were applied to draw results and conclusions. The preliminary findings of the study show that emigration has left deep socio-economic impacts on the life standards of the rural families left behind in Gujrat. These families improved their status and living standard through remittances. Emigration is one of the major sources of household economic development, and it also alleviates poverty at the household level as well as at the community and country levels. The rationale behind migration varies individually and geographically. Popular attractions in Pakistan include securing high status, improvement in health conditions, coping with others, getting married to acquire nationality, using unfair means, and opting for educational visas. 
Emigrants not only send remittances but also return to their country of origin with newly acquired skills and valuable knowledge, because emigrants learn new methods of living and working. There are also women migrants who experience downward social mobility by engaging in jobs that are beneath their educational qualifications.
Keywords: emigration, life standard, families, left behind, rural area, Gujrat
Procedia PDF Downloads 442
405 Effect of Surfactant Concentration on Dissolution of Hydrodynamically Trapped Sparingly Soluble Oil Micro Droplets
Authors: Adil Mustafa, Ahmet Erten, Alper Kiraz, Melikhan Tanyeri
Abstract:
Work presented here is based on a novel experimental technique used to hydrodynamically trap oil microdroplets inside a microfluidic chip at the junction of microchannels known as the stagnation point. Hydrodynamic trapping has recently been used to trap and manipulate particles ranging from microbeads to DNA and single cells. Benzyl benzoate (BB) is used as the droplet material. The microdroplets are trapped individually at the stagnation point and their dissolution is observed. Experiments are performed for two concentrations (10 mM or 10 µM) of AOT surfactant (docusate sodium salt) and two flow rates for each case. Moreover, the experimental data are compared with the Zhang-Yang-Mao (ZYM) model, which studies the dissolution of liquid microdroplets in the presence of a host fluid experiencing extensional creeping flow. Industrial processes such as polymer blending systems, in which heat or mass transport occurs, experience extensional flow, and an insight into these phenomena is of significant importance to many industrial processes. The experimental technique exploited here gives an insight into the dissolution of liquid microdroplets under the extensional flow regime. The comparison of our experimental results with the ZYM model reveals that the dissolution of microdroplets at the lower surfactant concentration (10 µM) fits the ZYM model at the saturation concentration (Cs) value reported in the literature (Cs = 15×10⁻³ kg/m³), while for the higher surfactant concentration (10 mM), which is also above the critical micelle concentration (CMC) of the surfactant (5 mM), the data fit the ZYM model at Cs = 45×10⁻³ kg/m³, which is three times the value reported in the literature. The difference in the Cs value from the literature shows an enhancement in the dissolution rate of sparingly soluble BB microdroplets at surfactant concentrations higher than the CMC. Enhancement in the dissolution of sparingly soluble materials is of great importance in the pharmaceutical industry. 
Enhancement in the dissolution of sparingly soluble drugs is a key research area for the drug design industry. The experimental method is also advantageous because it is robust and involves no mechanical contact with the droplets under study, which are freely suspended in the fluid, in contrast to existing methods used for testing the dissolution of drugs. The experiments also give an insight into CMC measurement for surfactants.
Keywords: extensional flow, hydrodynamic trapping, Zhang-Yang-Mao, CMC
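The diffusion-limited core of the droplet-shrinkage problem can be sketched numerically. This is only the zero-flow limit, not the full ZYM model with its extensional-flow enhancement, and the diffusivity and density values below are assumed for illustration; only Cs = 15×10⁻³ kg/m³ comes from the abstract.

```python
def droplet_radius(R0, D, Cs, rho, t, dt=1e-3):
    """Forward-Euler integration of dR/dt = -D*Cs/(rho*R), the
    diffusion-limited shrinkage of a sparingly soluble droplet.
    The full ZYM model multiplies this flux by a flow-dependent
    enhancement factor, which is omitted here."""
    R = R0
    for _ in range(int(t / dt)):
        R -= dt * D * Cs / (rho * R)
        if R <= 0.0:
            return 0.0  # fully dissolved
    return R

# Illustrative values: a 10 um droplet; D and rho are assumed, not measured
R = droplet_radius(R0=10e-6, D=1e-9, Cs=15e-3, rho=1112.0, t=100.0)
```

Separating variables gives the closed form R(t)² = R0² − 2·D·Cs·t/ρ, so the Euler result can be validated against it.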
Procedia PDF Downloads 345
404 Study of Variation of Winds Behavior on Micro Urban Environment with Use of Fuzzy Logic for Wind Power Generation: Case Study in the Cities of Arraial do Cabo and São Pedro da Aldeia, State of Rio de Janeiro, Brazil
Authors: Roberto Rosenhaim, Marcos Antonio Crus Moreira, Robson da Cunha, Gerson Gomes Cunha
Abstract:
This work provides details on the wind speed behavior within the cities of Arraial do Cabo and São Pedro da Aldeia, located in the Lakes Region of the State of Rio de Janeiro, Brazil. This region has one of the best potentials for wind power generation. In the interurban layer, wind conditions are very complex and depend on physical geography, the size and orientation of surrounding buildings and constructions, population density, and land use. In the same context, the fundamental surface parameter that governs the production of flow turbulence in urban canyons is the surface roughness. Such factors can influence the potential for power generation from the wind within the cities. Moreover, the use of wind on a small scale is not fully exploited due to the complexity of wind flow measurement inside cities, which makes this type of resource difficult to predict accurately. This study demonstrates how fuzzy logic can facilitate the assessment of the complexity of the wind potential inside the cities. It presents a decision support tool and its ability to deal with inaccurate information using linguistic variables created by the heuristic method. It relies on already published studies about the variables that influence wind speed in the urban environment. These variables were turned into verbal expressions used in the computer system, which facilitated the establishment of rules for fuzzy inference and integration with an application for smartphones used in the research. The first part of the study describes the challenges of sustainable development, followed by incentive policies for the use of renewable energy in Brazil. The next chapter covers the study area characteristics and the concepts of fuzzy logic. Data were collected in a field experiment using qualitative and quantitative methods for assessment. 
As a result, a map of various points within the studied cities is presented, with their wind viability evaluated by a decision support system using a multivariate classification method based on fuzzy logic.
Keywords: behavior of winds, wind power, fuzzy logic, sustainable development
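A minimal sketch of this kind of fuzzy inference in plain Python: the membership functions, universes of discourse, and rules below are illustrative placeholders, not the linguistic variables actually used in the study.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wind_viability(mean_speed, roughness):
    """Score 0..1 from mean wind speed (m/s) and surface roughness length (m),
    via min-AND rules and a weighted-average (Sugeno-style) defuzzification."""
    speed_high   = tri(mean_speed, 4.0, 8.0, 12.0)
    speed_low    = tri(mean_speed, 0.0, 2.0, 5.0)
    rough_smooth = tri(roughness, 0.0, 0.1, 0.5)
    rough_urban  = tri(roughness, 0.3, 1.0, 2.0)
    # Each rule: (firing strength, crisp viability level)
    rules = [
        (min(speed_high, rough_smooth), 0.9),  # fast wind, open terrain
        (min(speed_high, rough_urban),  0.5),  # fast wind, rough canyon
        (min(speed_low,  rough_smooth), 0.3),
        (min(speed_low,  rough_urban),  0.1),
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A real implementation would take its variables and rule base from the published studies the authors cite, and defuzzify with the method they chose.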
Procedia PDF Downloads 293
403 Understanding Complexity at Pre-Construction Stage in Project Planning of Construction Projects
Authors: Mehran Barani Shikhrobat, Roger Flanagan
Abstract:
Construction planning and scheduling based on the current tools and techniques is either deterministic in nature (Gantt chart, CPM) or applies only a very small probability of completion (PERT) for each task. However, every project embodies assumptions and influences and should start with a complete set of clearly defined goals and constraints that remain constant throughout the duration of the project. Construction planners continue to apply the traditional methods and tools of “hard” project management that were developed for “ideal projects,” neglecting the potential influence of complexity on the design and construction process. The aim of this research is to investigate the emergence and growth of complexity in project planning and to provide a model to consider the influence of complexity on the total project duration at the post-contract award pre-construction stage of a project. The literature review showed that complexity originates from different sources: environment, technical, and workflow interactions. These can be divided into two categories of complexity factors: first, project tasks, and second, project organisation and management. Project task complexity may originate from performance, lack of resources, or environmental changes for a specific task. Complexity factors that relate to organisation and management refer to workflow and the interdependence of different parts. The literature review highlighted the ineffectiveness of traditional tools and techniques in planning for complexity. In this research, the fundamental causes of complexity in construction projects were investigated through a questionnaire with industry experts. The results were used to develop a model that considers the core complexity factors and their interactions. System dynamics was used to investigate the model and to consider the influence of complexity on project planning. 
Feedback from experts revealed 20 major complexity factors that impact project planning. The factors are divided into five categories known as core complexity factors. To compare the weight of each factor, the Analytic Hierarchy Process (AHP) analysis method was used. The comparison showed that externalities are ranked as the biggest influence across the complexity factors. The research underlines that there are many internal and external factors that impact project activities and the project overall. It shows the importance of considering the influence of complexity on the project master plan undertaken at the post-contract award pre-construction phase of a project.
Keywords: project planning, project complexity measurement, planning uncertainty management, project risk management, strategic project scheduling
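The AHP weighting step can be sketched with the row geometric-mean approximation to Saaty's eigenvector method. The 3×3 pairwise matrix below is a hypothetical example; the study's actual comparisons over its five factor categories are not reproduced here.

```python
from math import prod

def ahp_weights(M):
    """Priority vector of a pairwise-comparison matrix using the row
    geometric-mean method, a standard approximation to the principal
    eigenvector used in AHP."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: factor 1 (say, "externalities") dominates the others
M = [[1.0,       3.0,       5.0],
     [1.0 / 3.0, 1.0,       3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
w = ahp_weights(M)  # roughly [0.64, 0.26, 0.10]
```

A full AHP treatment would also compute the consistency ratio of each matrix before accepting the weights.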
Procedia PDF Downloads 138
402 Assessing the Theoretical Suitability of Sentinel-2 and Worldview-3 Data for Hydrocarbon Mapping of Spill Events, Using Hydrocarbon Spectral Slope Model
Authors: K. Tunde Olagunju, C. Scott Allen, Freek Van Der Meer
Abstract:
Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt techniques for hydrocarbon identification to achieve detection in order to model an appropriate cleanup program. Identification on optical sensors allows not only for detection but also for characterization and quantification. Until recently, in optical remote sensing, quantification and characterization were only potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is at present mainly obtained via airborne survey. In this research, two (2) operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the hydrocarbon spectral slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven (7) different hydrocarbon oils (crude and refined) taken on ten (10) different substrates with a laboratory ASD FieldSpec spectrometer were convolved to Sentinel-2 and WorldView-3 resolution, using their full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the statistical significance of the qualitative analysis on the WorldView-3 sensor for all studied hydrocarbon oils returned with a 95% confidence level (P-value ˂ 0.01), except for diesel. 
Using multifactor analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations on Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data due to the non-conformity of spectral bands with key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3
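The convolution of a laboratory spectrum to a broad sensor band can be sketched as a weighted average under an assumed Gaussian spectral response built from the band centre and FWHM. The 1730 nm centre and 40 nm FWHM below are illustrative values near the 1.73 µm feature; real sensors publish tabulated response functions rather than Gaussians.

```python
import math

def gaussian_srf(wl, center, fwhm):
    """Assumed Gaussian spectral response function for one band."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * ((wl - center) / sigma) ** 2)

def band_reflectance(wavelengths, reflectance, center, fwhm):
    """Convolve a high-resolution spectrum to one band: the
    SRF-weighted mean reflectance over the sampled wavelengths."""
    w = [gaussian_srf(x, center, fwhm) for x in wavelengths]
    return sum(wi * r for wi, r in zip(w, reflectance)) / sum(w)
```

HYSS-style slope values would then be formed from band reflectances convolved near the 1.73 µm and 2.30 µm absorptions.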
Procedia PDF Downloads 216
401 Testing Supportive Feedback Strategies in Second/Foreign Language Vocabulary Acquisition between Typically Developing Children and Children with Learning Disabilities
Authors: Panagiota A. Kotsoni, George S. Ypsilandis
Abstract:
Learning an L2 is a demanding process for all students, and in particular for those with learning disabilities (LD), who demonstrate an inability to catch up with their classmates’ progress in a given period of time. This area of study, i.e. examining children with learning disabilities in L2, has not yet attracted the growing interest that is registered in L1 and thus remains comparatively neglected. It is this scientific field that this study wishes to contribute to. The long-term purpose of this study is to locate effective Supportive Feedback Strategies (SFS) and add to the quality of learning of second language vocabulary in both typically developing (TD) and LD children. Specifically, this study aims at investigating and comparing the performance of TD and LD children on two different types of SFS related to vocabulary short- and long-term retention. In this study, two different SFSs were applied to a total of ten (10) unknown vocabulary items. Both strategies provided morphosyntactic clarifications upon new contextualized vocabulary items. The traditional SFS (direct) provided the information in a single hypertext page opened by selecting the relevant item. The experimental SFS (engaging) provided the exact same information, split over three successive hypertext pages in the form of a hybrid dialogue, asking the subjects to move on to the next page by selecting the relevant link. It was hypothesized that in this way the subjects would engage in their own learning process by actively asking for more information, which would further lead to better retention. The participants were fifty-two (52) foreign language learners (33 TD and 19 LD), aged 9 to 12, attending an English language school at level A1 (CEFR). The design of the study followed a typical pre-post-post test procedure, testing after an hour and after a week. 
The results indicated statistically significant group differences, with TD children performing significantly better than the LD group in both short- and long-term memory measurements and in both SFSs. As regards the effectiveness of one SFS over the other, the initial hypothesis was not supported by the evidence, as the traditional SFS was more effective than the experimental one in both TD and LD children. This difference proved to be statistically significant only in the long-term memory measurement and only in the TD group. It may be concluded that the human brain adapts to different SFSs, although it shows a small preference for information provided in a direct manner.
Keywords: learning disabilities, memory, second/foreign language acquisition, supportive feedback
Procedia PDF Downloads 284
400 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. 
The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.
Keywords: fire prediction, drone, smoke toxicity, analyser, fire management
Procedia PDF Downloads 89
399 Verification of Dosimetric Commissioning Accuracy of Flattening Filter Free Intensity Modulated Radiation Therapy and Volumetric Modulated Therapy Delivery Using Task Group 119 Guidelines
Authors: Arunai Nambi Raj N., Kaviarasu Karunakaran, Krishnamurthy K.
Abstract:
The purpose of this study was to create American Association of Physicists in Medicine (AAPM) Task Group 119 (TG 119) benchmark plans for flattening filter free (FFF) beam deliveries of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) in the Eclipse treatment planning system. The planning data were compared with the flattening filter (FF) IMRT and VMAT plan data to verify the dosimetric commissioning accuracy of FFF deliveries. AAPM TG 119 proposed a set of test cases called multi-target, mock prostate, mock head and neck, and C-shape to ascertain the overall accuracy of IMRT planning, measurement, and analysis. We used these test cases to investigate the performance of the Eclipse treatment planning system for flattening filter free beam deliveries. For these test cases, we generated two sets of treatment plans: the first plan using 7–9 IMRT fields and a second plan utilizing a two-arc VMAT technique, for each of the beam deliveries (6 MV FF, 6 MV FFF, 10 MV FF, and 10 MV FFF). The planning objectives and doses were set as described in TG 119. The dose prescriptions for multi-target, mock prostate, mock head and neck, and C-shape were taken as 50, 75.6, 50, and 50 Gy, respectively. The point dose (mean dose to the contoured chamber volume) at the specified positions/locations was measured using a compact (CC‑13) ion chamber. The composite planar dose and per-field gamma analysis were measured with an IMatriXX Evaluation 2D array with OmniPro IMRT software (version 1.7b). FFF beam deliveries of IMRT and VMAT plans were comparable to flattening filter beam deliveries. Our planning and quality assurance results matched the TG 119 data. AAPM TG 119 test cases are useful for generating FFF benchmark plans. 
From the data obtained in this study, we conclude that the commissioning of FFF IMRT and FFF VMAT delivery was found to be within the limits of TG 119 and that the performance of the Eclipse treatment planning system for FFF plans was found to be satisfactory.
Keywords: flattening filter free beams, intensity modulated radiation therapy, task group 119, volumetric modulated arc therapy
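The per-field gamma analysis mentioned above combines a dose-difference and a distance-to-agreement criterion; a minimal 1-D sketch of the global-gamma search over measured points, with hypothetical 3%/3 mm tolerances, might look like:

```python
import math

def gamma_point(r_pos, r_dose, measured, dd=0.03, dta=3.0):
    """Gamma value at one reference point against a measured 1-D profile.
    `measured` is a list of (position_mm, dose) pairs; dd is the fractional
    dose tolerance and dta the distance-to-agreement in mm. The reference
    point passes when the returned value is <= 1.0."""
    best = float("inf")
    for m_pos, m_dose in measured:
        g = math.hypot((m_dose - r_dose) / (dd * r_dose),
                       (m_pos - r_pos) / dta)
        best = min(best, g)
    return best
```

Commercial QA software such as OmniPro evaluates this over 2-D arrays with interpolation between detectors; the sketch above only shows the criterion itself.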
Procedia PDF Downloads 146
398 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty
Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton
Abstract:
Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration being the most predominantly described. We describe a “quartered head technique” (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA in order to ensure reconstruction of leg length, offset, and stem version, such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out; after exclusions, 124 hips remained in the final analysis. Power analysis indicated that a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was then calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson and Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As references, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. An LLD of less than 6 mm was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. 
Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58mm. 84% of patients (104/124) had LLD within ±6mm of the contralateral limb. The mean absolute post-operative difference in offset from the contralateral leg was +3.88mm (range -15 to +9mm, median 3mm). 90% of patients (112/124) were within ±6mm offset of the contralateral limb. There was no statistical difference noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other techniques described may enable surgeons to reduce even further the discrepancies between pre-operative state and post-operative outcome.
Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique
Procedia PDF Downloads 81
397 Design Challenges for Severely Skewed Steel Bridges
Authors: Muna Mitchell, Akshay Parchure, Krishna Singaraju
Abstract:
There is an increasing need for medium- to long-span steel bridges with complex geometry due to site restrictions in developed areas. One solution to grade separations in congested areas is to use longer spans on skewed supports that avoid at-grade obstructions, limiting impacts to the foundation. Where vertical clearances are also a constraint, continuous steel girders can be used to reduce superstructure depths. Combining continuous long steel spans with severe skews can resolve these constraints, at a cost: the behavior of skewed girders is challenging to analyze and design, with subsequent complexity during fabrication and construction. As part of a corridor improvement project, Walter P Moore designed two 1700-foot side-by-side bridges carrying four lanes of traffic in each direction over a railroad track. The bridges consist of prestressed concrete girder approach spans and three-span continuous steel plate girder units. The roadway design added complex geometry to the bridge, with horizontal and vertical curves combined with superelevation transitions within the plate girder units. The substructure at the steel units was skewed approximately 56 degrees to satisfy the existing railroad right-of-way requirements. A horizontal point of curvature (PC) near the end of the steel units required the use of flared girders and chorded slab edges. Due to the flared girder geometry, the cross-frame spacing in each bay is unique. Staggered cross frames were provided based on AASHTO LRFD and NCHRP guidelines for high-skew steel bridges. Skewed steel bridges develop significant forces in the cross frames and rotation in the girder webs due to differential displacements along the girders under dead and live loads. In addition, under thermal loads, skewed steel bridges expand and contract not along the alignment parallel to the girders but along the diagonal connecting the acute corners, resulting in horizontal displacement both along and perpendicular to the girders. 
AASHTO LRFD recommends a 95 degree Fahrenheit temperature differential for the design of joints and bearings. The live load and the thermal loads resulted in significant horizontal forces and rotations in the bearings that necessitated the use of HLMR bearings. A unique bearing layout was selected to minimize the effect of thermal forces. The span length, width, skew, and roadway geometry at the bridges also required modular bridge joint systems (MBJS) with inverted-T bent caps to accommodate movement in the steel units. 2D and 3D finite element analysis models were developed to accurately determine the forces and rotations in the girders, cross frames, and bearings and to estimate thermal displacements at the joints. This paper covers the decision-making process for developing the framing plan, bearing configurations, joint type, and analysis models involved in the design of the high-skew three-span continuous steel plate girder bridges.
Keywords: complex geometry, continuous steel plate girders, finite element structural analysis, high skew, HLMR bearings, modular joint
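The thermal-movement behaviour described above can be sketched as free expansion resolved into components parallel and perpendicular to the girder lines. This is a simplified hand calculation, not the paper's finite element model: it ignores restraint, assumes the movement direction lies at the skew angle to the girders rather than along the true acute-corner diagonal, and uses the AASHTO steel coefficient of 6.5×10⁻⁶/°F with the 95 °F differential quoted above. The 500 ft unit length is assumed for illustration.

```python
import math

def thermal_movement(length_ft, skew_deg, alpha=6.5e-6, dT_F=95.0):
    """Unrestrained thermal movement of a unit of the given length (ft),
    resolved into components along and across the girder lines, assuming
    the movement direction makes the skew angle with the girders."""
    d = alpha * dT_F * length_ft          # total free movement, ft
    theta = math.radians(skew_deg)
    return d * math.cos(theta), d * math.sin(theta)

# e.g. a hypothetical 500 ft unit at a 56-degree skew
along, across = thermal_movement(500.0, 56.0)
```

At a 56-degree skew the across-girder component exceeds the along-girder one, which is why the bearings see significant transverse demands.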
Procedia PDF Downloads 193
396 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of each video frame with respect to the others; (4) finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria, was used for testing our method. 
Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
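For flat terrain, the mapping of a rectified frame into map coordinates can be illustrated by a plane-projective (homography) transform. This is a deliberate simplification of the four-step procedure above, which recovers full interior and exterior orientation and accounts for relief rather than using a single 3×3 matrix.

```python
def apply_homography(H, x, y):
    """Map a pixel (x, y) through a 3x3 homography H (row-major nested
    lists) to planar ground coordinates, applying the projective divide."""
    X = H[0][0] * x + H[0][1] * y + H[0][2]
    Y = H[1][0] * x + H[1][1] * y + H[1][2]
    W = H[2][0] * x + H[2][1] * y + H[2][2]
    return X / W, Y / W
```

In the full pipeline, H for each frame would be derived from the interior and exterior orientation parameters found in steps (2)-(4), and mosaicking amounts to warping every frame through its own transform into the common map grid.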
Procedia PDF Downloads 478
395 Simulation of Maximum Power Point Tracking in a Photovoltaic System: A Circumstance Using Pulse Width Modulation Analysis
Authors: Asowata Osamede
Abstract:
Optimizing the gain in output power of stand-alone photovoltaic (PV) systems is one of the major focuses of PV research in recent times, owing to its low carbon emissions and efficiency. Power failures or outages from commercial providers do not, in general, promote development in the public and private sectors, and they fundamentally limit the development of industries. A well-structured PV system is important for efficient and cost-effective monitoring. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PVs in the southern hemisphere. This paper analyzes the system using a solar charger with MPPT from a pulse width modulation (PWM) perspective. The power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel set to an orientation angle of 0° north, with corresponding tilt angles of 36°, 26°, and 16°. The loads employed in this setup are three lead acid batteries (LAB). The fully charged, charging, and not-charging conditions are observed for all three batteries. The results obtained in this research are used to draw conclusions that would provide a benchmark for researchers and scientists worldwide, so as to establish the best tilt and orientation angles for the maximum power point in a basic off-grid PV system. A quantitative analysis is employed in this research. Quantitative research tends to focus on measurement and proof. Inferential statistics are frequently used to generalize what is found about the study sample to the population as a whole. 
This involves selecting and defining the research question, deciding on a study type, choosing the data collection tools, selecting the sample and its size, and analyzing, interpreting, and validating the findings. Preliminary results, which include regression analysis (normal probability plot and residual plot using a degree-6 polynomial), identified the maximum power point of the system. Among the tilt angles tested, the 36° tilt provided the best average on-time, which in turn puts the system into a pulse width modulation stage.
Keywords: power-conversion, meteonorm, PV panels, DC-DC converters
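The maximum power point itself is the voltage at which the product V·I of the panel's I-V curve peaks. A minimal illustrative sketch (not the authors' setup; the short-circuit current, saturation current, and thermal-voltage term below are assumed example values):

```python
import math

# Simplified PV I-V curve: I(V) = Isc - I0 * (exp(V/Vt) - 1).
ISC = 5.0      # short-circuit current, A (assumed)
I0 = 1e-9      # diode saturation current, A (assumed)
VT = 1.0       # lumped thermal-voltage term, V (assumed)

def current(v):
    return ISC - I0 * (math.exp(v / VT) - 1.0)

# Scan the voltage range and keep the point of maximum power P = V * I.
best_v, best_p = 0.0, 0.0
v = 0.0
while current(v) > 0.0:
    p = v * current(v)
    if p > best_p:
        best_v, best_p = v, p
    v += 0.01

print(f"MPP at about {best_v:.2f} V, {best_p:.2f} W")
```

An MPPT charger performs the same search continuously in hardware, adjusting its operating point as irradiance and temperature shift the curve.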
Procedia PDF Downloads 147
394 Exploring the Applicability of a Rapid Health Assessment in India
Authors: Claudia Carbajal, Jija Dutt, Smriti Pahwa, Sumukhi Vaid, Karishma Vats
Abstract:
ASER Centre, the research and assessment arm of the Pratham Education Foundation, sees measurement as the first stage of action. ASER uses primary research to push for, and give empirical foundations to, policy discussions at a multitude of levels. At the household level, common citizens use a simple assessment (a floor-level test) to measure learning across rural India. This paper presents evidence on the applicability of the ASER approach to the health sector. A citizen-led assessment was designed and executed that collected information from young mothers with children up to a year of age. The pilot assessments were rolled out in two different models: paid surveyors and student volunteers. The survey covered three geographic areas: 1,239 children in the Jaipur District of Rajasthan, 2,086 in the Rae Bareli District of Uttar Pradesh, and 593 children in the Bhuj Block in Gujarat. The survey tool was designed to study knowledge of health-related issues, daily practices followed by young mothers, and access to relevant services and programs. It provides insights on behaviors related to infant and young child feeding practices, child and maternal nutrition and supplementation, water and sanitation, and health services. Moreover, the survey studies the reasons behind behaviors, giving policy-makers actionable pathways to improve the implementation of social sector programs. Although data on health outcomes are available, this approach could provide a rapid annual assessment of health issues with indicators that are easy to understand and act upon, so that measurement does not become the exclusive domain of experts. The results give many insights into early childhood health behaviors and challenges. Around 98% of children are breastfed, and approximately half are not exclusively breastfed for the first 6 months. Government-established diet diversity guidelines are met for less than 1 out of 10 children.
Although most households are satisfied with the quality of their drinking water, most tested households had contaminated water.
Keywords: citizen-led assessment, rapid health assessment, infant and young child feeding, water and sanitation, maternal nutrition, supplementation
Procedia PDF Downloads 170
393 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element's domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells which are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes are obtained at comparable accuracy levels when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking the analytical limit of this element's stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing.
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even on very coarse discretizations, since they rely only on the ratio of mode-one to mode-two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
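The formal definition the gSIFs are evaluated from is the classical LEFM limit, e.g. for mode I ahead of the crack tip: sigma_yy(r, 0) = K_I / sqrt(2*pi*r). A generic sketch of that extraction (not the SBFEM post-processing itself; the near-tip field below is synthetic, generated from an assumed K_I):

```python
import math

K_TRUE = 10.0  # assumed mode-I SIF, MPa*sqrt(m), used to build the field

def sigma_yy(r):
    """Synthetic singular stress ahead of the crack tip (theta = 0)."""
    return K_TRUE / math.sqrt(2.0 * math.pi * r)

# Sample the stress at several radii and back out K_I at each; in a
# numerical solution these estimates would be extrapolated to r -> 0.
radii = [1e-4, 1e-3, 1e-2]
k_estimates = [sigma_yy(r) * math.sqrt(2.0 * math.pi * r) for r in radii]
k_I = sum(k_estimates) / len(k_estimates)
print(f"Recovered K_I = {k_I:.3f} MPa*sqrt(m)")
```

With an FE-computed stress field the per-radius estimates would scatter, which is exactly why a richer approximation space in the cracked element (the proposed h-/p-refinement) tightens the recovered value.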
Procedia PDF Downloads 146
392 Simulation and Analysis of MEMS-Based Flexible Capacitive Pressure Sensors with COMSOL
Authors: Ding Liangxiao
Abstract:
The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. The sensor comprises top and bottom electrodes made from gold (Au), noted for its excellent conductivity; a middle dielectric layer made from a composite of silver nanowires (AgNWs) embedded in thermoplastic polyurethane (TPU); and a flexible, durable substrate of polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer, specifically its thickness and surface area, impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to accurately calculate the dielectric constant of the AgNWs/TPU composite. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs.
Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror actual operating conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of MEMS sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.
Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability
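The abstract does not specify which effective-medium formulation was used; one common choice is the Maxwell-Garnett mixing rule. A sketch combining it with an ideal parallel-plate capacitance estimate (all material values below are assumed example numbers, not the study's data):

```python
# Maxwell-Garnett mixing rule (spherical inclusions) plus an ideal
# parallel-plate capacitance C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_garnett(eps_matrix, eps_inclusion, fill_fraction):
    """Effective relative permittivity of inclusions dispersed in a matrix."""
    em, ei, f = eps_matrix, eps_inclusion, fill_fraction
    return em * (ei + 2*em + 2*f*(ei - em)) / (ei + 2*em - f*(ei - em))

def plate_capacitance(eps_r, area_m2, gap_m):
    """Baseline capacitance of a parallel-plate stack."""
    return EPS0 * eps_r * area_m2 / gap_m

# Assumed values: TPU matrix eps_r ~ 4, a high effective eps_r for the
# AgNW phase, 10% fill, 1 mm^2 electrode area, 10 um dielectric gap.
eps_eff = maxwell_garnett(eps_matrix=4.0, eps_inclusion=50.0, fill_fraction=0.1)
c0 = plate_capacitance(eps_eff, area_m2=1e-6, gap_m=10e-6)
print(f"eps_eff = {eps_eff:.2f}, C = {c0 * 1e12:.2f} pF")
```

Pressure thins the gap, so the same formula shows why capacitance rises under load and why gap (thickness) and area dominate the sensitivity study described above.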
Procedia PDF Downloads 45
391 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases mostly focuses on the study of each machine or production step, but it is not common to study the quality the production process achieves from an aggregated-value viewpoint, which can serve as a quality measure for determining environmental impact. In this paper, the theory of exergetic cost is used to determine the aggregated exergy of three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares follows a batch production technique; therefore, this theory is applied to discontinuous flows built from single models of workstations, and subsequently the complete exergy model of each product is constructed using flowcharts. These models represent the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value), as well as the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses, and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare: the first describes the operation of the components of each machine and locates the exergy losses, while the second estimates the exergy-aggregated value of the finished product and the wasted feedstock.
Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These shortfalls are caused mainly by the technical limitations of the machines, the oversizing of metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From this information, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
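The balance underlying these figures is simple bookkeeping: the aggregated exergy is what remains of the exergy input after equipment losses and irreversibilities. A minimal sketch with assumed per-workstation values (illustrative only, not the study's data):

```python
# Per-workstation exergy balance: aggregated = input - (losses + irreversibilities).
workstations = {
    # name: (exergy input [kJ], losses + irreversibilities [kJ]) -- assumed
    "lathe":             (400.0, 380.0),
    "punching press":    (300.0, 285.0),
    "vulcanizing press": (900.0, 880.0),
    "treatment tank":    (400.0, 390.0),
}

total_input = sum(inp for inp, _ in workstations.values())
aggregated = sum(inp - loss for inp, loss in workstations.values())
efficiency = aggregated / total_input

print(f"input {total_input:.0f} kJ, aggregated {aggregated:.0f} kJ, "
      f"exergy efficiency {100 * efficiency:.1f}%")
```

With these example numbers the aggregated value comes out near one-thirtieth of a roughly 2 MJ input, mirroring the proportions the abstract reports.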
Procedia PDF Downloads 170
390 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement
Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura
Abstract:
Seasonal population fluctuation presents a challenge to tourist cities, since the number of inhabitants can double according to the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and also allows cooperation between the inhabitants and the local government. The model allows public managers to evaluate the impact of seasonal population fluctuation on waste generation and to improve the planning of resource utilization throughout the year. The study uses data from the company that collects the garbage in Florianópolis, a Brazilian city with the profile of a tourist destination, attracting visitors with its numerous beaches and warm weather. The fluctuations are caused by the number of people who come to the city throughout the year for holidays, summertime vacations, or business events. Crowdsensing is accomplished through smartphones with access to a data collection app, with voluntary participation of the population; participants can access the information collected through a portal. Crowdsensing represents an innovative and participatory approach that involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role, given the complexity of fostering collaboration, establishing the available sensors, and collecting and processing the gathered data. Practical implications of the tool described in this paper refer, for example, to the management of seasonal tourism in a large municipality whose public services are impacted by the floating population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area.
Also, by linking crowdsourced data to databases from other public service providers, e.g., water, garbage collection, electricity, public transport, and telecommunications, it is possible to estimate the floating population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, increasing the quality of the service as perceived by citizens and tourists.
Keywords: big data, dashboards, floating population, smart city, urban management solutions
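The core of the waste-to-population model is a simple regression that can then be inverted: fit waste collected against known population, and read off an estimated population from a month's waste figure. A sketch with made-up monthly numbers (not the Florianópolis dataset):

```python
# Illustrative data: population (thousands) vs. waste collected (kilotons).
months_pop = [450, 480, 900, 950, 500, 470]
waste_tons = [9.1, 9.7, 18.2, 19.0, 10.0, 9.5]

# Ordinary least squares for waste = a + b * population.
n = len(months_pop)
mean_x = sum(months_pop) / n
mean_y = sum(waste_tons) / n
b = sum((x - mean_x) * (y - mean_y)
        for x, y in zip(months_pop, waste_tons)) \
    / sum((x - mean_x) ** 2 for x in months_pop)
a = mean_y - b * mean_x

def population_from_waste(w_kton):
    """Invert the fit: estimated population for a given waste volume."""
    return (w_kton - a) / b

print(f"slope {b:.4f} kt per 1000 people; "
      f"15 kt -> ~{population_from_waste(15.0):.0f} thousand people")
```

In practice the same inversion, applied month by month, yields the seasonal floating-population curve that planners use to size collection services.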
Procedia PDF Downloads 287
389 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes, and the contamination of soils and water with metals, are considered a major environmental problem in mining areas. AMD is produced by interactions of water, air, and sulphidic mine wastes. This environmental problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals, e.g., pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems can be contaminated and, consequently, human health and wildlife affected. Furthermore, secondary minerals, typically formed during the weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulfates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing-1) hyperspectral data and the constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of the city of Marrakech (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981), this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were collected and located using accurate GPS (≤ ±30 cm). Then, endmember spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm.
Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content of each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple kriging in a GIS environment for validation purposes. The Hyperion data were corrected for the spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors; atmospherically corrected and transformed to surface reflectance using the MODTRAN 4.2 radiative transfer code; corrected for sensor smile (a 1-3 nm shift in the VNIR and SWIR); and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA over the entire spectral range (427-2355 nm) and validated against the ground truth map generated by kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.
Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
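Linear spectral unmixing models each pixel spectrum as a weighted sum of endmember spectra, with the abundances constrained to sum to one (and stay in [0, 1]). A minimal two-endmember sketch of that idea, with made-up reflectances; the study's CLSMA handles more endmembers and 100+ Hyperion bands, but the principle is the same:

```python
# Assumed five-band reflectance spectra (illustrative, not Kettara data).
iron_oxide = [0.10, 0.25, 0.40, 0.35, 0.30]   # endmember 1
background = [0.30, 0.32, 0.33, 0.34, 0.35]   # endmember 2
pixel      = [0.22, 0.29, 0.36, 0.34, 0.33]   # observed mixed spectrum

def unmix2(y, e1, e2):
    """Least-squares abundance of e1 under a1 + a2 = 1, clipped to [0, 1]."""
    d = [u - v for u, v in zip(e1, e2)]   # e1 - e2
    r = [u - v for u, v in zip(y, e2)]    # y  - e2
    a1 = sum(ri * di for ri, di in zip(r, d)) / sum(di * di for di in d)
    return min(1.0, max(0.0, a1))

frac = unmix2(pixel, iron_oxide, background)
print(f"iron-oxide fraction: {frac:.2f}")
```

Applying such an estimate per pixel over the whole scene yields the fraction map that is then validated against the kriged ground truth.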
Procedia PDF Downloads 228