Search results for: spatial multi-criteria analysis model
Neonatal Mortality, Infant Mortality, and Under-five Mortality Rates in the Provinces of Zimbabwe: A Geostatistical and Spatial Analysis of Public Health Policy Provisions
Authors: Jevonte Abioye, Dylan Savary
Abstract:
The aim of this research is to present a disaggregated geostatistical analysis of subnational, provincial trends in child mortality in Zimbabwe from a child health policy perspective. Soon after gaining independence in 1980, the government embarked on efforts to promote equitable health care, namely through the provision of primary health care. Government intervention programmes brought hope and promise, but achieving equity in primary health care coverage was hindered by pre-existing disparities in maternal health care, disproportionately concentrated in urban settings to the detriment of rural communities. The article highlights policies and programs adopted by the government during the Millennium Development Goals period (1990-2015) in response to the inequities that characterised the country's maternal health care. A longitudinal comparative method for analysing spatial variation in child mortality rates across provinces is developed based on geostatistical analysis. Cross-sectional and time-series data were extracted from the World Health Organisation (WHO) global health observatory data repository, demographic health survey reports, and previous academic and technical publications. Results suggest that although health care policy was uniform across provinces, not all provinces received the same antenatal and perinatal services. Accordingly, provincial rates of child mortality growth between 1994 and 2015 varied significantly. Evidence on the trends of child mortality rates and maternal health policies in Zimbabwe can be valuable for public child health policy planning and public service delivery design, both in Zimbabwe and across developing countries pursuing the sustainable development agenda.
Keywords: antenatal care, perinatal care, infant mortality rate, neonatal mortality rate, under-five mortality rate, millennium development goals, sustainable development agenda
Occurrence and Spatial Distribution of Pesticide Residues in Butter and Ghee (Clarified Butter Fat) in Punjab (India)
Authors: J. S. Bedi, J. P. S. Gill, R. S. Aulakh, Prabhjit Kaur
Abstract:
The present study was undertaken to monitor organochlorine, organophosphate, and synthetic pyrethroid pesticide residues in butter and ghee samples collected from six districts of Punjab. Pesticide residues were estimated with a multi-residue analytical technique using gas chromatography equipped with electron capture (GC-ECD) and flame thermionic (GC-FTD) detectors. Residues were confirmed by gas chromatography-mass spectrometry in both SIM and scan modes. Results indicated the presence of HCH and p,p'-DDE as the predominant contaminants in both butter and ghee, even after their ban/restriction on usage in India. Residues of HCH were detected in 25.5% and 23.2% of butter and ghee samples, respectively, while residues of p,p'-DDE were recorded in 29.3% and 25.0% of butter and ghee samples, respectively. More importantly, endosulfan, cypermethrin, fenvalerate, deltamethrin, and chlorpyrifos were observed in a few butter and ghee samples, raising serious concerns. The spatial variation in the occurrence of pesticide residues indicated the cotton belt of Punjab as the most affected area.
Keywords: butter, ghee, pesticide residues, Punjab
Analysis of Underground Logistics Transportation Technology and Planning Research: Based on Xiong'an New Area, China
Authors: Xia Luo, Cheng Zeng
Abstract:
Under the promotion of the Central Committee of the Communist Party of China and the State Council in 2017, Xiong'an New Area became the third crucial new area established in China, after Shenzhen and Shanghai. The significance of its construction lies in mitigating Beijing's non-capital functions and exploring a new mode of optimized development in densely populated and economically intensive areas. For this purpose, underground logistics can assume the role of goods distribution in the capital, relieve road transport pressure in the Beijing-Tianjin-Hebei urban agglomeration, and adjust and optimize its urban layout and spatial structure. Firstly, the construction planning of Xiong'an New Area and the development of underground logistics are summarized, covering the state of development abroad and the development trends and bottlenecks of underground logistics in China. This paper then explores the technicality, feasibility, and necessity of four modes of transportation: pneumatic capsule pipeline (PCP) technology, CargoCap technology, the cable-hauled mule, and the automated guided vehicle (AGV). Their technical parameters and characteristics were presented to relevant experts and scholars; by establishing an indicator system and carrying out a questionnaire survey with the Delphi method, the final recommendation is obtained: China should develop logistics vehicles similar to CargoCap, adopting rail-based and driverless operation. Based on China's temporal and spatial logistics demand and the geographical pattern of Xiong'an New Area, the construction scale, technical parameters, node locations, and other vital parameters of the underground logistics system are planned. In this way, we hope to speed up the new area's construction and the logistics industry's innovation.
Keywords: the Xiong'an New Area, underground logistics, contrastive analysis, CargoCap, logistics planning
Fiber Based Pushover Analysis of Reinforced Concrete Frame
Authors: Shewangizaw Tesfaye Wolde
Abstract:
The engineering community has developed a method called performance-based seismic design, in which structures are designed to meet predefined performance levels agreed by the parties. Since structures are designed economically for the maximum actions expected during their service life, they may be pushed beyond their elastic limit, which calls for nonlinear analysis. In this paper, conventional pushover analysis (nonlinear static analysis) is used for the performance assessment of a case-study reinforced concrete (RC) frame building located in Addis Ababa, Ethiopia, where the peak ground acceleration proposed by the RADIUS 1999 project and others is more than twice that of EBCS-8:1995; a critical planar frame is analysed. A fiber beam-column model is used to capture material nonlinearity, including the tension stiffening effect. The reliability of the fiber model and the validity of the software outputs are checked in a verification chapter. The aim of this paper is therefore to propose a way of assessing the structural performance of existing reinforced concrete frame buildings, as well as checking designs.
Keywords: seismic, performance, fiber model, tension stiffening, reinforced concrete
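To illustrate the fiber approach at section level, the following is a minimal Python sketch: the cross-section is discretized into concrete and steel fibers, and moment-curvature points are traced by bisecting the neutral-axis depth for axial equilibrium. The geometry, reinforcement, and material laws are illustrative assumptions, and the simplified concrete law omits the tension stiffening included in the paper's model.

import numpy as np

# Fiber-section moment-curvature sketch (compression taken as positive).
def sig_c(eps):                                   # concrete: parabola-plateau, no tension
    fc, e0, eu = 30e6, 0.002, 0.0035
    r = np.clip(eps / e0, 0.0, None)
    s = fc * np.where(r < 1.0, 2*r - r**2, 1.0)
    return np.where((eps > 0) & (eps <= eu), s, 0.0)

def sig_s(eps):                                   # steel: elastic-perfectly plastic
    return np.clip(200e9 * eps, -500e6, 500e6)

h, b, nf = 0.5, 0.3, 100                          # assumed 500x300 mm section, 100 fibers
yf = np.linspace(-h/2 + h/(2*nf), h/2 - h/(2*nf), nf)
Af = b * h / nf
ys, As = np.array([-0.21, 0.21]), 6.0e-4          # assumed bar layers, steel area (m2)

def NM(phi, yna):                                 # axial force and moment at curvature phi
    ec, es = phi * (yf - yna), phi * (ys - yna)
    N = sig_c(ec).sum()*Af + (sig_s(es)*As).sum()
    M = (sig_c(ec)*yf).sum()*Af + (sig_s(es)*ys*As).sum()
    return N, M

for phi in (0.002, 0.005, 0.01, 0.02):            # curvatures in 1/m
    lo, hi = -h/2, h/2                            # bisect neutral axis until N ~ 0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if NM(phi, mid)[0] > 0.0:
            lo = mid
        else:
            hi = mid
    print(f"phi = {phi:.3f} 1/m -> M = {NM(phi, 0.5*(lo+hi))[1]/1e3:.0f} kNm")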
Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution that can be determined explicitly. Moreover, from Bayesian theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is important since limiting the number of threshold levels in the electronics to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
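As a compact illustration of the recovery scheme, the Python sketch below builds a PCA prior from synthetic training waveforms and solves the Tikhonov-regularized least-squares problem from eight sparse samples. The pulse shapes, dimensions, and regularization weight are placeholders, and random time indices stand in for the threshold-crossing time stamps used in J-PET.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20e-9, 200)                          # 20 ns window
train = np.array([np.exp(-(t - 8e-9 - d)**2 / (2 * (1.5e-9)**2))
                  for d in rng.normal(0, 0.5e-9, 500)]) # training waveforms

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt.T[:, :8]                                         # 8 PCA components
prior_var = s[:8]**2 / (len(train) - 1)                 # per-component variance

true = np.exp(-(t - 8e-9 - 0.3e-9)**2 / (2 * (1.5e-9)**2))  # unseen test signal
idx = np.sort(rng.choice(len(t), size=8, replace=False))    # 8 sparse samples
y = true[idx]

A = V[idx, :]                                           # sampled basis
Gamma = np.diag(1.0 / np.sqrt(prior_var))               # Tikhonov matrix from prior
lam = 1e-3
coef = np.linalg.solve(A.T @ A + lam * Gamma.T @ Gamma, A.T @ (y - mean[idx]))
recovered = mean + V @ coef
print("relative recovery error:",
      np.linalg.norm(recovered - true) / np.linalg.norm(true))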
Spatial and Temporal Variability of Meteorological Drought Including Atmospheric Circulation in Central Europe
Authors: Andrzej Wałęga, Marta Cebulska, Agnieszka Ziernicka-Wojtaszek, Wojciech Młocek, Agnieszka Wałęga, Tommaso Caloiero
Abstract:
Drought is one of the natural phenomena influencing many aspects of human activity, such as food production, agriculture, industry, and the ecological condition of the environment. In the area of the Polish Carpathians, there are periods with a deficit of rainwater and an increasing frequency of dry months, especially in the cold half of the year. The aim of this work is a spatial and temporal analysis of drought, expressed as the SPI, in a heterogeneous area covering the Polish Carpathians and the adjacent highland region in Central Europe, based on long-term precipitation data. Also, to our best knowledge, for the first time in this work, drought characteristics analyzed via the SPI are discussed based on the atmospheric circulation calendar. The study region is the Upper Vistula Basin, located in the southern and south-eastern part of Poland. In this work, monthly precipitation from 56 rainfall stations was analysed from 1961 to 2022. The 3-, 6-, 9-, and 12-month Standardized Precipitation Index (SPI) values were used as indicators of meteorological drought. For the 3-month SPI, the main climatic mechanisms determining extreme droughts were defined based on the calendar of synoptic circulations. The Mann-Kendall test was used to detect trends in extreme droughts. Statistically significant trends of SPI were observed at 52.7% of all analyzed stations, and in most cases a positive trend was observed. Statistically significant trends were more frequent at stations located in the western part of the analyzed region. Long-term droughts, represented by the 12-month SPI, occurred at all stations but not in all years. Short-term droughts (3-month SPI) were most frequent in the winter season, droughts at the 6- and 9-month scales in winter and spring, and 12-month droughts in winter and autumn. The spatial distribution of drought was highly diverse. The most intensive drought occurred in 1984, with the 6-month SPI covering 98% of the analyzed region and the 9- and 12-month SPI covering 90% of the entire region. Droughts exhibit a seasonal pattern, with a dominant 10-year periodicity for all analyzed variants of the SPI. Additionally, Fourier analysis revealed a 2-year periodicity for the 3-, 6-, and 9-month SPI and a 31-year periodicity for the 12-month SPI. The results provide insights into the typical climatic conditions in Poland, with strong seasonality in precipitation. The study highlighted that short-term extreme droughts, represented by the 3-month SPI, are often caused by anticyclonic situations with high-pressure wedges Ka and Wa, and anticyclonic West, as observed in 52.3% of cases. These findings are crucial for understanding the spatial and temporal variability of short- and long-term extreme droughts in Central Europe, particularly for the agricultural sector dominant in the northern part of the analyzed region, where drought frequency is highest.
Keywords: atmospheric circulation, drought, precipitation, SPI, the Upper Vistula Basin
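A minimal Python sketch of the SPI computation follows (gamma fit over aggregated totals, with a mixed CDF for zeros; a full implementation fits each calendar month separately, and the synthetic record here merely stands in for the 1961-2022 station data):

import numpy as np
from scipy import stats

def spi(monthly_precip, scale=3):
    # Aggregate over `scale` months, fit a gamma law, map CDF to standard normal.
    x = np.convolve(monthly_precip, np.ones(scale), mode="valid")
    a, _, b = stats.gamma.fit(x[x > 0], floc=0)
    q = (x == 0).mean()                         # probability of zero totals
    cdf = q + (1 - q) * stats.gamma.cdf(x, a, loc=0, scale=b)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=62 * 12)  # synthetic monthly record
print(spi(precip, scale=3)[:12].round(2))                # first year of 3-month SPI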
Lie Symmetry Treatment for Pricing Options with Transactions Costs under the Fractional Black-Scholes Model
Authors: B. F. Nteumagne, E. Pindza, E. Mare
Abstract:
We apply Lie symmetry analysis to price and hedge options in the fractional Brownian framework. Lie groups have a well-established reputation in the mathematical sciences and, lately, in finance. In the presence of transaction costs and under fractional Brownian motion, analytical solutions become difficult to obtain. Lie symmetry analysis allows us to simplify the problem and obtain new analytical solutions. In this paper, we investigate the use of symmetries to reduce the resulting partial differential equation and derive the analytical solution. We then propose a hedging procedure and calibration technique for these types of options and test the model on real market data. We show the robustness of our methodology by applying it to the pricing of digital options.
Keywords: fractional Brownian model, symmetry, transaction cost, option pricing
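For context, a commonly used form of the pricing equation in this framework, written in LaTeX, is shown below; the abstract does not spell out the model, so the exact form and the Leland-type cost adjustment should be read as assumptions:

\[
\frac{\partial V}{\partial t} + H\,\sigma^{2} t^{2H-1} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}} + rS\,\frac{\partial V}{\partial S} - rV = 0,
\]
which reduces to the classical Black-Scholes equation at \(H = \tfrac{1}{2}\). Proportional transaction costs are often absorbed through an adjusted volatility
\[
\hat{\sigma}^{2} = \sigma^{2}\left(1 + \sqrt{\tfrac{2}{\pi}}\,\frac{k}{\sigma\sqrt{\delta t}}\right),
\]
where \(k\) is the transaction cost rate and \(\delta t\) the rehedging interval; the symmetry reduction is then sought for the resulting nonlinear PDE.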
Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter, a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter, the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown; then, the analysis results are compared with the observational records. Using the analysis results, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with an Auto-Regressive eXogenous (ARX) model, we compare the analysis results with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass SR model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake. The ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. The prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground appears to have been small. 3) For the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values, which means that the effect of the nonlinearity of the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating changes in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
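The ARX identification step can be sketched in a few lines of Python; the regression and pole-to-period conversion are standard, while the synthetic input-output records, sampling interval, and model orders are placeholders for the observed ground motion and building response:

import numpy as np
from scipy.signal import lfilter

def fit_arx(u, y, na=2, nb=2, dt=0.02):
    # Least-squares fit of y[k] + a1*y[k-1] + ... = b1*u[k-1] + ...,
    # then convert the dominant discrete pole to a period and damping ratio.
    n = max(na, nb)
    rows = [np.concatenate((-y[k-na:k][::-1], u[k-nb:k][::-1]))
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], theta[:na]))).astype(complex)
    s = np.log(poles[np.argmax(poles.imag)]) / dt    # assumes an underdamped pair
    wn = abs(s)
    return 2 * np.pi / wn, -s.real / wn              # (period [s], damping ratio)

rng = np.random.default_rng(2)
u = rng.standard_normal(5000)                        # stand-in for input motion
y = lfilter([0.0, 0.02], [1.0, -1.95, 0.96], u)      # stand-in for response record
T1, zeta1 = fit_arx(u, y)
print(f"1st period ~ {T1:.2f} s, 1st damping ratio ~ {zeta1:.3f}")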
Analysing the Mesoscale Variations of 7Be and 210Pb Concentrations in a Complex Orography, Guadalquivir Valley, Southern Spain
Authors: M. A. Hernández-Ceballos, E. G. San Miguel, C. Galán, J. P. Bolívar
Abstract:
The evolution of 7Be and 210Pb activity concentrations in surface air along the Guadalquivir valley (southern Iberian Peninsula) is presented in this study. Samples collected over 48 h every fifteen days, from September 2012 to November 2013, at two sampling sites (Huelva, at the mouth of the valley, and Cordoba, in its middle section, about 250 km away) are used 1) to analyse the spatial variability and 2) to understand the influence of wind conditions on 7Be and 210Pb. Similar average concentrations were registered along the valley. The mean 7Be activity concentration was 4.46 ± 0.21 mBq/m3 at Huelva and 4.33 ± 0.20 mBq/m3 at Cordoba, although higher maximum and minimum values were registered at Cordoba (9.44 mBq/m3 and 1.80 mBq/m3) than at Huelva (7.95 mBq/m3 and 1.04 mBq/m3). No significant differences were observed in the mean 210Pb activity concentrations between Cordoba (0.40 ± 0.04 mBq/m3) and Huelva (0.35 ± 0.04 mBq/m3), although the maximum (1.10 mBq/m3 and 0.87 mBq/m3) and minimum (0.02 mBq/m3 and 0.04 mBq/m3) values were recorded in Cordoba. Although similar average concentrations were obtained at both sites, the temporal evolution of the two natural radionuclides differs between them. The meteorological analysis of two sampling periods, in which large differences in 7Be and 210Pb concentrations were observed, indicates the distinct impact of surface and upper wind dynamics. The analysis reveals the different effects of the two sea-land breeze patterns usually observed along the valley (pure and non-pure) and of the air masses at higher layers associated with each. The pure pattern, with a short inland development (around 30 km) and an increasing accumulation process, favours high concentrations of both radionuclides at Huelva (the coastal site), while the non-pure pattern, with winds sweeping the valley until they arrive at Cordoba (250 km away), causes high activity values at that site. These results reveal the impact of mesoscale conditions on these two natural radionuclides and the importance of these circulations for their spatial and temporal variability.
Keywords: 7Be, 210Pb, air masses, mesoscale process
Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as a digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique to construct a low-dimensional space that speeds up the analysis during the online stage. The RB model was validated against experimental test results to establish a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model constructed offline speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
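The offline/online split behind the reduced-basis speed-up can be sketched as follows in Python; a linear parametric stiffness problem stands in for the truss FE model, and the sizes and parameter ranges are illustrative:

import numpy as np

rng = np.random.default_rng(3)
n = 400                                              # full FE dimension (illustrative)
K0 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
      + np.diag(-np.ones(n - 1), -1))
K1 = np.diag(np.linspace(0.1, 1.0, n))               # parameter-affine part
f = np.ones(n)

# Offline: full solves at training parameters, then a POD basis from snapshots.
snaps = np.column_stack([np.linalg.solve(K0 + mu * K1, f)
                         for mu in np.linspace(0.1, 2.0, 20)])
V = np.linalg.svd(snaps, full_matrices=False)[0][:, :5]   # 5-mode basis

# Online: a tiny 5x5 solve for a new parameter value.
mu = 1.37
Kr = V.T @ (K0 + mu * K1) @ V        # precomputable as V.T K0 V + mu * V.T K1 V
u_rb = V @ np.linalg.solve(Kr, V.T @ f)
u_full = np.linalg.solve(K0 + mu * K1, f)            # full solve, only to check
print("relative RB error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))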
Reference Model for the Implementation of an E-Commerce Solution in Peruvian SMEs in the Retail Sector
Authors: Julio Kauss, Miguel Cadillo, David Mauricio
Abstract:
E-commerce is a business model that allows companies to optimize the processes of buying, selling, transferring goods, and exchanging services through computer networks or the Internet. In Peru, electronic commerce is used infrequently. This situation is due, in part, to the fact that there is no model that allows companies to implement an e-commerce solution, which means that most SMEs do not have adequate knowledge to adapt to electronic commerce. In this work, a reference model is proposed for the implementation of an e-commerce solution in Peruvian SMEs in the retail sector. It consists of five phases: Business Analysis, Business Modeling, Implementation, Post-Implementation, and Results. The model was validated in an SME of the Peruvian retail sector through the implementation of an e-commerce platform, through which the company increased its sales through the delivery channel by 10% in the first month of deployment. This result showed that the model is easy to implement, economical, and agile. In addition, it allowed the company to broaden its business offer, adapt to e-commerce, and improve customer loyalty.
Keywords: e-commerce, retail, SMEs, reference model
Predictive Analysis of the Stock Price Market Trends with Deep Learning
Authors: Suraj Mehrotra
Abstract:
The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially with respect to the stock market, has been a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to approaches based on deep learning and neural networks, among others. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Four models (linear regression, neural network, decision tree, and naïve Bayes) were compared on several stocks: Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan Chase, and Johnson & Johnson. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which suggests that the decision tree model overfitted the training set.
Keywords: machine learning, testing set, artificial intelligence, stock analysis
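A minimal Python sketch of this model comparison on lagged prices follows; a synthetic random walk stands in for the listed tickers, the naïve Bayes classifier is omitted because it requires a binned target, and the scores are for illustration only:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
close = 100 + np.cumsum(rng.normal(0, 1, 1000))      # synthetic daily closes
X = np.column_stack([close[i:i - 5] for i in range(5)])   # 5 lagged closes
y = close[5:]                                        # next-day close
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

models = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(random_state=0),   # unpruned
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:18s} train R2 {r2_score(y_tr, model.predict(X_tr)):+.3f}  "
          f"test R2 {r2_score(y_te, model.predict(X_te)):+.3f}")
# An unpruned tree typically fits the training set perfectly and scores worse
# out of sample -- the overfitting pattern reported above.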
Static Simulation of Pressure and Velocity Behaviour for NACA 0006 Blade Profile of Well's Turbine
Authors: Chetan Apurav
Abstract:
In this paper, the behaviour of pressure and velocity over the blade profile of a Wells turbine is analysed. The blade profile considered is NACA 0006. The analysis was performed in the CFX module of Ansys Workbench. The CAD model of the blade profile, with given dimensions, was created in Creo and then imported into Ansys for further analysis. The turbine model was enclosed in a cylindrical domain and analysed under a constant air velocity of 5 m/s and zero relative pressure, with the turbine in static condition. The results are presented in tabular as well as graphical form. It is observed that the relative pressure over the blade profile remains stable along the radial length, and hence the profile is suitable for practical use.
Keywords: Wells turbine, oscillating water column, ocean engineering, wave energy, NACA 0006
Characterization of Aquifer Systems and Identification of Potential Groundwater Recharge Zones Using Geospatial Data and Arc GIS in Kagandi Water Supply System Well Field
Authors: Aijuka Nicholas
Abstract:
A research study was undertaken to characterize the aquifers and identify the potential groundwater recharge zones in the Kagandi district. Quantitative characterization of the hydraulic conductivities of aquifers is of fundamental importance to the study of groundwater flow and contaminant transport in aquifers. A conditional approach is used to represent the spatial variability of hydraulic conductivity. Briefly, it involves using qualitative and quantitative geologic borehole-log data to generate a three-dimensional (3D) hydraulic conductivity distribution, which is then adjusted through calibration of a 3D groundwater flow model using pumping-test data and historic hydraulic data. The approach consists of several steps. The study area was divided into five sub-watersheds on the basis of artificial drainage divides. A digital terrain model (DTM) was developed using ArcGIS to determine the general drainage pattern of the Kagandi watershed. Hydrologic characterization involved the determination of the various hydraulic properties of the aquifers. Potential groundwater recharge zones were identified by integrating thematic maps of the digital elevation model, land use, and drainage pattern in ArcGIS and Surfer (Golden Software). The study demonstrates the potential of GIS in delineating groundwater recharge zones, and the developed methodology will be applicable to other watersheds in Uganda.
Keywords: aquifers, ArcGIS, groundwater recharge, recharge zones
Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced on the grating by every single slit due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways depending on the geometry and material of the grating. The phase-modulating grating on the probe is a kind of metasurface that manipulates the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using the overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
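The angular-spectrum test for phase matching can be sketched numerically in Python; the permittivity, grating depth, and spot size are assumed values, and the crude figure of merit here is simply the weight of the phase-matched spectral component:

import numpy as np

wavelength = 632.8e-9
k0 = 2 * np.pi / wavelength
eps_m = -11.6 + 1.2j                                 # assumed metal permittivity
k_spp = (k0 * np.sqrt(eps_m / (eps_m + 1))).real     # SPP wavenumber (air side)

x = np.linspace(-5e-6, 5e-6, 4096)
period = 2 * np.pi / k_spp                           # first-order matched grating
grating = np.exp(1j * 0.5 * np.cos(2 * np.pi * x / period))  # phase modulation
field = grating * np.exp(-(x / 2e-6)**2)             # modulated Gaussian spot

kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
spectrum = np.abs(np.fft.fft(field))**2
fom = spectrum[np.argmin(np.abs(kx - k_spp))] / spectrum.max()
print(f"k_spp/k0 = {k_spp / k0:.3f}, matched-component weight = {fom:.3f}")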
Analysis of an Alternative Data Base for the Estimation of Solar Radiation
Authors: Graciela Soares Marcelli, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Claudineia Brazil, Rafael Haag
Abstract:
The sun is a source of renewable energy, and its use as a source of both heat and light is one of the most promising energy alternatives for the future. To design thermal or photovoltaic systems, a solar irradiation database is necessary. Brazil still has a limited number of meteorological stations providing solar radiation data, which makes reanalysis systems a significant alternative data source. ERA-Interim is a global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The data assimilation system used for the production of ERA-Interim is based on a 2006 version of the IFS (Cy31r2). The system includes a four-dimensional variational analysis (4D-Var) with a 12-hour analysis window. The spatial resolution of the dataset is approximately 80 km, on 60 vertical levels from the surface up to 0.1 hPa. This work aims to make a comparative analysis between the ERA-Interim data and the data observed in the Solarimetric Atlas of the State of Rio Grande do Sul, to verify its applicability in the absence of an observed data network. The results of the analysis for the study region support the use of reanalysis data as an alternative for assessing the energy potential of a given region.
Keywords: energy potential, reanalyses, renewable energy, solar radiation
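The station-versus-reanalysis comparison reduces to a few standard statistics; a short Python sketch, with synthetic monthly irradiation values standing in for the atlas and ERA-Interim series:

import numpy as np

rng = np.random.default_rng(9)
obs = 15 + 6 * np.sin(np.linspace(0, 2 * np.pi, 12))   # observed, MJ/m2/day
era = obs + rng.normal(0.5, 1.0, 12)                   # biased, noisy reanalysis

bias = np.mean(era - obs)
rmse = np.sqrt(np.mean((era - obs) ** 2))
r = np.corrcoef(obs, era)[0, 1]
print(f"bias = {bias:.2f}  RMSE = {rmse:.2f}  r = {r:.3f}")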
Development of a One-Window Services Model for Accessing Cancer Immunotherapies
Authors: Rizwan Arshad, Alessio Panza, Nimra Inayat, Syeda Mariam Batool Kazmi, Shawana Azmat
Abstract:
The rapidly expanding use of immunotherapy for a wide range of cancers, from late to early stages, has, predictably, been accompanied by evidence of inequities in access to these highly effective but costly treatments. In this survey-based case study, we aimed to develop a One-Window Services Model (OWSM), based on Andersen's behavioural model, to enhance competence in accessing cancer medications, particularly immunotherapies, through the analysis of 20 patient surveys conducted at the Armed Forces Bone Marrow Transplant Centre in Rawalpindi district from November to December 2022. A purposive sampling technique was used. Cronbach's alpha coefficient was found to be 0.71. The data were analyzed descriptively using SPSS version 26, and the results showed that the majority of the cancer patients were not competent to access their prescribed cancer immunotherapy because of individual-level, socioeconomic, and organizational barriers.
Keywords: cancer immunotherapy, one-window services model, accessibility, competence
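For reference, the reported reliability statistic can be reproduced with a few lines of Python; synthetic Likert responses stand in for the 20 surveys, while the formula itself is the standard Cronbach's alpha:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(20, 1))                     # shared trait per respondent
responses = np.clip(np.rint(3 + latent + rng.normal(0, 2.0, (20, 10))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")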
Soil Degradation Mapping Using Geographic Information System, Remote Sensing and Laboratory Analysis in the Oum Er Rbia High Basin, Middle Atlas, Morocco
Authors: Aafaf El Jazouli, Ahmed Barakat, Rida Khellouk
Abstract:
Mapping of soil degradation draws on field observations, laboratory measurements, and remote sensing data, integrating quantitative methods to map the spatial characteristics of soil properties at different spatial and temporal scales and to provide up-to-date information on the field. Since soil salinity, texture, and organic matter play a vital role in assessing topsoil characteristics and soil quality, remote sensing can be considered an effective method for studying these properties. The main objective of this research is to assess soil degradation by combining remote sensing data and laboratory analysis. To achieve this goal, soil samples were taken at 50 locations in the upper basin of the Oum Er Rbia in the Middle Atlas, Morocco. These samples were dried, sieved to 2 mm, and analyzed in the laboratory. Landsat 8 OLI imagery was analyzed using physical or empirical methods to derive soil properties; in addition, remote sensing can serve as a supporting data source. Deterministic (spline and inverse distance weighting) and probabilistic (ordinary kriging and universal kriging) interpolation methods were used to produce maps of each grain size class and soil property using GIS software. As a result, a correlation was found between soil texture and soil organic matter content. This approach, developed in ongoing research, will improve the prospects for the use of remote sensing data for mapping soil degradation in arid and semi-arid environments.
Keywords: soil degradation, GIS, interpolation methods (spline, IDW, kriging), Landsat 8 OLI, Oum Er Rbia high basin
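Of the interpolators used, IDW is the simplest to show; a minimal Python sketch follows, with sample coordinates and organic-matter values as synthetic placeholders for the 50 laboratory points:

import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    # Inverse-distance weighting: weights fall off as 1/d**power.
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(8)
xy = rng.uniform(0, 10_000, size=(50, 2))             # sample locations (m)
om = 1.5 + 2e-4 * xy[:, 0] + rng.normal(0, 0.2, 50)   # organic matter (%)
gx, gy = np.meshgrid(np.linspace(0, 10_000, 50), np.linspace(0, 10_000, 50))
om_map = idw(xy, om, np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(round(om_map.min(), 2), round(om_map.mean(), 2), round(om_map.max(), 2))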
Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)
Authors: Abdelkader Hachemi, Boualem Remini
Abstract:
Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp-interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results show good agreement with the geophysical prospection data, confirming the accuracy of the sharp-interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp-interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.
Keywords: seawater, intrusion, sharp interface, Algeria
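For background (this relation is standard for sharp-interface models rather than stated in the abstract), such models typically rest on the Ghyben-Herzberg approximation, written here in LaTeX:

\[
z = \frac{\rho_f}{\rho_s - \rho_f}\, h_f \approx 40\, h_f,
\]
where \(z\) is the interface depth below sea level, \(h_f\) the freshwater head above sea level, and \(\rho_f \approx 1000\ \mathrm{kg/m^3}\), \(\rho_s \approx 1025\ \mathrm{kg/m^3}\) the freshwater and seawater densities; pumping lowers \(h_f\) and therefore raises and advances the interface inland.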
Predicting Success and Failure in Drug Development Using Text Analysis
Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev
Abstract:
Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on the basis of false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into four groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (containing all other drugs that do not fit into the three categories). Text analysis was then performed on each document using two separate lexicons (BING and AFINN) to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of phrases by the word count of each document. Regression analysis was then performed with SPSS statistical software on each dataset (values obtained using the BING or AFINN lexicon during text analysis), using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had lower accuracy in predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the model at predicting outcomes of drugs in development, and many improvements may need to be made to later iterations of the model to sufficiently increase the accuracy.
Keywords: data analysis, drug development, sentiment analysis, text-mining
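The study worked in R with the BING and AFINN lexicons; the scoring step itself is simple enough to sketch in Python with a toy lexicon (the words and weights below are illustrative, not the actual lexicon entries):

LEXICON = {"promising": +1, "significant": +1, "robust": +1,
           "failed": -1, "adverse": -1, "inconclusive": -1}

def relative_scores(text, lexicon=LEXICON):
    # Relative positivity/negativity: affective-word counts over word count.
    words = text.lower().split()
    pos = sum(1 for w in words if lexicon.get(w, 0) > 0)
    neg = sum(1 for w in words if lexicon.get(w, 0) < 0)
    n = max(len(words), 1)
    return pos / n, neg / n

doc = "interim results were promising and robust despite inconclusive safety data"
print(relative_scores(doc))   # -> (0.2, 0.1)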
Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand's central government re-organised local government arrangements in Auckland by amalgamating the previous regional council and seven local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand's total population). This includes addressing Auckland's strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first-ever spatial plan in the region: the Auckland Plan (2012). The Auckland Plan supports the implementation of a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland become a globally competitive city and achieve the status of 'the most liveable city in the world'. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the 'rule book' on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out regarding intensification: the 'gap' between strategic planning and what has been achieved is evident in the argument for the 'compact' urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualised largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is profoundly influential, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up strategies to deliver intensification across diversified arenas. Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan's policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, which makes it doubtful whether the main principles of the intensification strategies can be realised. This raises questions over whether the Auckland Plan's policy goals can be achieved in practice, including delivering a 'quality compact city' and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development and the plan making and implementation frameworks of the first-ever spatial plan in Auckland, so as to explicate the objectives and processes involved and to consider whether these will facilitate decision-making processes that realise the anticipated intensive urban development.
Keywords: urban intensification, sustainable development, plan making, governance and implementation
Proactive Pure Handoff Model with SAW-TOPSIS Selection and Time Series Predict
Authors: Harold Vásquez, Cesar Hernández, Ingrid Páez
Abstract:
This paper addresses cognitive radio techniques and applies a pure proactive handoff model to decrease interference between the primary user (PU) and secondary user (SU), comparing it with a reactive handoff model. The study combines the multi-criteria decision-making models SAW and TOPSIS with three dynamic prediction techniques: AR, MA, and ARMA. Four metrics are used to evaluate the best model: the number of failed handoffs, the number of handoffs, the number of predictions, and the number of interference events. The results show the advantage of using this type of pure proactive model to predict changes in the PU on the selected channel and to reduce interference. The best-performing model was TOPSIS-MA; although TOPSIS-AR had a higher predictive ability, this was not reflected in the interference reduction.
Keywords: cognitive radio, spectrum handoff, decision making, time series, wireless networks
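The TOPSIS ranking at the heart of the channel selection can be sketched in Python as follows; the decision matrix, weights, and criteria are illustrative assumptions:

import numpy as np

def topsis(X, weights, benefit):
    # X: alternatives x criteria; benefit[j] is True when larger is better.
    V = (X / np.linalg.norm(X, axis=0)) * weights    # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness coefficient

# Candidate channels scored on predicted idle time and SNR (benefit criteria)
# and PU arrival rate (cost criterion):
X = np.array([[8.0, 15.0, 0.3],
              [5.0, 22.0, 0.1],
              [9.0, 10.0, 0.6]])
score = topsis(X, np.array([0.5, 0.3, 0.2]), np.array([True, True, False]))
print("scores:", score.round(3), "-> best channel:", int(np.argmax(score)))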
Maximizing the Community Services of Multi-Location Public Facilities in Urban Residential Areas by the Use of Constructing the Accessibility Index and Spatial Buffer Zone
Authors: Yen-Jong Chen, Jei-An Su
Abstract:
Public use facilities provide the basic infrastructure supporting the needs of urban sustainable development. These facilities include roads (streets), parking areas, green spaces, public schools, and city parks. However, how to acquire land with the proper location and size still remains uncertain in a capitalist economy where land is largely privately owned, such as in cities in Taiwan. The issue concerning the proper acquisition of reserved land for local public facilities (RLPF) policies has been continuously debated by the Taiwanese government for more than 30 years. Lately, the government has been re-evaluating projects connected with existing RLPF policies from the viewpoints of the needs of local residents, including the living environments of older adults. This challenging task includes addressing the requests of official bureau administrators, citizens whose property rights and current use status are affected, and other stakeholders, along with the means of development. To simplify the decision to acquire or release public land, we selected only public facilities that are needed for living in the local community, including parks, green spaces, plaza squares, and land for kindergartens, schools, and local stadiums. This study categorized these spaces as the community's "leisure public facilities" (LPF). By constructing an accessibility index of the services of such multi-function facilities, we computed and produced a GIS map of spatial buffer zones for each LPF. Through these procedures, the service needs provided by each LPF were clearly identified. We then used spatial buffer zone envelope mapping to evaluate these service areas. The results obtained can help decide which RLPF should be acquired or released so that community services can be maximized under a limited budget.
Keywords: urban public facilities, community demand, accessibility, spatial buffer zone, Taiwan
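The buffer-zone step translates directly into a few lines of Python with shapely; the facility coordinates, the 800 m service radius, and the crude inverse-distance accessibility score are assumptions for illustration:

from shapely.geometry import Point
from shapely.ops import unary_union

parks = [Point(500, 500), Point(1800, 900)]            # LPF locations (m)
envelope = unary_union([p.buffer(800) for p in parks]) # merged service areas

homes = [Point(600, 700), Point(3000, 3000)]
for h in homes:
    covered = envelope.contains(h)
    access = sum(1.0 / max(h.distance(p), 1.0) for p in parks)  # crude index
    print(f"home ({h.x:.0f}, {h.y:.0f}): covered={covered}, access={access:.4f}")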
OpenMP Parallelization of Three-Dimensional Magnetohydrodynamic Code FOI-PERFECT
Authors: Jiao F. Huang, Shi Chen, Shu C. Duan, Gang H. Wang
Abstract:
Due to its complex spatial structure and dynamic temporal evolution, an analytic solution of an X-pinch process is out of the question, and numerical simulation becomes an important tool in X-pinch studies. Intrinsically, simulations of X-pinches are three-dimensional (3D) because of the specific structure of the load. Furthermore, in order to resolve both the μm-scale features and ns-scale durations, fine spatial mesh grids and short time steps are usually adopted. The resulting large computational scales make the parallelization of codes a vital problem to be solved if any practical simulations are to be carried out. In this work, we report the OpenMP parallelization of our 3D magnetohydrodynamic (MHD) code FOI-PERFECT. Results of test runs confirm that computational efficiency is improved after parallelization, and both the sequential and parallel versions give the same physical results under the same initial conditions.
Keywords: MHD simulation, OpenMP, parallelization, X-pinch
Analysis of the Decoupling Relationship between Urban Green Development and the Level of Regional Integration Based on the Tapio Model
Authors: Ruoyu Mao
Abstract:
Exploring the relationship between urban green development and the level of regional integration is of great significance for realising high-quality and sustainable regional development. Based on the Tapio decoupling model and a theoretical framework linking urban green development and regional integration, this paper builds an analysis system, carries out a quantitative analysis of urban green development and the level of regional integration over a given period, and discusses the relationship between the two. Taking China's Yangtze River Delta urban agglomeration as an example, it studies the degree of decoupling, the type of decoupling, and the evolution of the spatio-temporal pattern of decoupling between the level of urban green development and the level of regional integration over the period 2014-2021, with the aim of providing a useful reference for the future development of the region.
Keywords: regional integration, urban green development, Tapio decoupling model, Yangtze River Delta urban agglomeration
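The Tapio elasticity itself is a one-line ratio of growth rates; a short Python sketch with the standard 0.8/1.2 thresholds follows (the index values are placeholders, and treating green development as the "pressure" variable against integration as the driver is an assumption here):

def tapio_elasticity(env0, env1, eco0, eco1):
    # %-change in the environmental variable over %-change in the driver.
    return ((env1 - env0) / env0) / ((eco1 - eco0) / eco0)

def classify(e, d_env, d_eco):
    if d_env < 0 and d_eco > 0:
        return "strong decoupling"
    if d_env > 0 and d_eco > 0:
        if e < 0.8:
            return "weak decoupling"
        return "expansive coupling" if e <= 1.2 else "expansive negative decoupling"
    return "other (recessive cases)"

e = tapio_elasticity(env0=0.52, env1=0.55, eco0=0.40, eco1=0.48)
print(round(e, 3), classify(e, d_env=0.55 - 0.52, d_eco=0.48 - 0.40))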
Identification of Wiener Model Using Iterative Schemes
Authors: Vikram Saini, Lillie Dewan
Abstract:
This paper presents iterative schemes based on least squares, hierarchical least squares, and the stochastic approximation gradient method for the identification of Wiener models with parametric structure. A gradient method based on stochastic approximation is presented for parameter estimation of the Wiener model under noise. Simulation results are presented for the Wiener model structure with different static nonlinear elements in the presence of colored noise, providing a comparative analysis of the iterative methods. The stochastic gradient method shows improved estimation performance and provides fast convergence of the parameter estimates.
Keywords: hard non-linearity, least square, parameter estimation, stochastic approximation gradient, Wiener model
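A minimal Python sketch of the stochastic approximation gradient scheme for a Wiener model follows (an FIR linear block followed by a cubic static nonlinearity; the true system, noise level, and step-size schedule are illustrative, and the estimates are identifiable only up to a gain shared between the two blocks):

import numpy as np

rng = np.random.default_rng(5)
N, nb = 5000, 4
b_true = np.array([0.8, 0.4, -0.2, 0.1])             # linear FIR block
f_true = lambda v: v + 0.5 * v**3                    # static nonlinearity

u = rng.uniform(-1, 1, N)
v = np.convolve(u, b_true)[:N]
y = f_true(v) + 0.01 * rng.standard_normal(N)

b, c = np.zeros(nb), np.array([1.0, 0.0])            # model: y = c0*v + c1*v**3
for k in range(nb, N):
    phi = u[k - nb + 1:k + 1][::-1]                  # u[k], u[k-1], ...
    vhat = b @ phi
    e = y[k] - (c[0] * vhat + c[1] * vhat**3)
    mu = 0.1 / (1 + 0.01 * k)                        # decreasing SA step size
    b += mu * e * (c[0] + 3 * c[1] * vhat**2) * phi  # chain rule through f
    c += mu * e * np.array([vhat, vhat**3])
print("b estimate:", b.round(3), " c estimate:", c.round(3))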
Landfill Failure Mobility Analysis: A Probabilistic Approach
Authors: Ali Jahanfar, Brajesh Dubey, Bahram Gharabaghi, Saber Bayat Movahed
Abstract:
Ever-increasing population growth in major urban centers and environmental challenges in siting new landfills have resulted in a growing trend in the design of mega-landfills, some with extraordinary heights and dangerously steep slopes. Landfill failure mobility risk analysis is one of the most uncertain types of dynamic rheology modelling, due to the very large inherent variability in the shear strength properties of heterogeneous solid waste material. The waste flows of three historic dumpsite failures and two landfill failures were back-analyzed using run-out modeling with the DAN-W model. The travel distances of the waste flows during the failures were calculated taking into account the variability in material shear strength properties. The probability distribution functions for the shear strength properties of the waste material were grouped into four major classes based on waste compaction (landfills versus dumpsites) and composition (high versus low quantity of high-shear-strength materials such as wood, metal, plastic, paper, and cardboard in the waste). This paper presents a probabilistic method for estimating the spatial extent of waste avalanches after a potential landfill failure, to create maps of vulnerability scores informing property owners and residents of the level of risk.
Keywords: landfill failure, waste flow, Voellmy rheology, friction coefficient, waste compaction and type
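The probabilistic idea reduces to sampling strength parameters and collecting runout statistics; the Python sketch below uses a friction-only sliding-block surrogate in place of the Voellmy-based DAN-W runs, and the drop height and distribution parameters are assumptions:

import numpy as np

rng = np.random.default_rng(6)
n_sim = 10_000
H = 40.0                                   # assumed drop height of the failure, m
# Assumed lognormal friction coefficient for one waste class; the paper's
# fitted class distributions would replace these numbers.
mu_f = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=n_sim)
runout = H / mu_f                          # sliding block: L = H / friction coeff.
print(f"median runout {np.median(runout):.0f} m, "
      f"95th percentile {np.percentile(runout, 95):.0f} m")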
Suggestion of Reasonable Analysis Model for T-Girder Modular Bridge
Authors: Soonwon Kang, Jinwoong Choi, Sungnam Hong, Seung-Kyung Kye, Sun-Kyu Park
Abstract:
The modular bridge is constructed by assembling standardized precast segments, and it is classified into slab and T-girder types. The T-girder type has a transverse joint; however, unlike the slab type, for which analytical studies of the joint have been carried out, the transverse joint of the T-girder type has not been verified. Therefore, an appropriate analysis model needs to be proposed for the precast modular T-girder bridge with a transverse joint. In this study, specimens and analysis models of the integrated and segmented types were compared. In the tests, the integrated and segmented specimens deflected 98.40 mm and 74.66 mm at maximum loads of 269.71 kN and 248.29 kN, respectively; in the corresponding models, the deflections were 84.04 mm and 69.39 mm at the same maximum loads. The proposed analysis model for precast T-girder modular bridges is therefore appropriate.
Keywords: precast, T-girder modular bridge, finite element analysis, joint
Geomechanical Technologies for Assessing Three-Dimensional Stability of Underground Excavations Utilizing Remote-Sensing, Finite Element Analysis, and Scientific Visualization
Authors: Kwang Chun, John Kemeny
Abstract:
Light detection and ranging (LiDAR) has become a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One of its major applications is to use the detailed geometrical information of underground structures as a basis for generating a three-dimensional numerical model that can be used in a geotechnical stability analysis such as FEM or DEM. To date, however, straightforward techniques for reconstructing numerical models from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating all the various processes, from LiDAR scanning to finite element numerical analysis. The study focuses on converting LiDAR 3D point clouds of geologic structures containing complex surface geometries into a finite element model. This methodology has been applied to Kartchner Caverns in Arizona, where detailed underground and surface point clouds can be used for the analysis of underground stability. Numerical simulations were performed using the finite element code Abaqus and visualized with the 3D scientific visualization tool ParaView. The results are useful for studying the stability of all types of underground excavations, including underground mining and tunneling.
Keywords: finite element analysis, LiDAR, remote-sensing, scientific visualization, underground stability
Intercultural Urbanism: Interpreting Cultural Inclusion in Traditional Precincts of Contemporary Cities: A Case of Mattancherry
Authors: Amrutha Jayan
Abstract:
Cities are attractors of human population, offering opportunities for economic activity to different linguistic, cultural, and ethnic groups. The urban form and design of the city impact the lives of these people. Social and cultural exclusion results in spatial segregation and gentrification. The spaces provided in cities must be inclusive of all these communities so that they feel part of the city and contribute to society. Intercultural urbanism is a theory and practice of city building, planning, and design of urban spaces and architectures that is cognizant of the social impact of the built environment. The postulate acknowledges cultural differences and opportunities for cultural exchange. Literature on intercultural urbanism, culture and space, spatial justice, and cultural inclusion is analyzed to identify parameters contributing to intercultural placemaking. A qualitative study of Mattancherry shows how the precinct has sustained itself throughout the years, with different communities living together within a radius of 5 km and creating a diverse and vibrant environment. The research identifies the urban elements that contribute to intercultural interactions and maintain the synergy between these communities. Public spaces, porous edges, built form, streets, and accessibility contribute to chance encounters and intercultural interactivity. The research seeks to find the factors that contribute to intercultural placemaking.
Keywords: intercultural urbanism, cultural inclusion, spatial justice, public space