Search results for: regression models drone
7063 Dietary Vitamin D Intake and the Bladder Cancer Risk: A Pooled Analysis of Prospective Cohort Studies
Authors: Iris W. A. Boot, Anke Wesselius, Maurice P. Zeegers
Abstract:
Diet may play an essential role in the aetiology of bladder cancer (BC). Vitamin D is involved in various biological functions which have the potential to prevent BC development. In addition, vitamin D influences the uptake of calcium and phosphorus, thereby possibly indirectly influencing the risk of BC. The aim of the present study was to investigate the relation between vitamin D intake and BC risk. Individual dietary data were pooled from three cohort studies. Food item intake was converted to daily intakes of vitamin D, calcium and phosphorus. Pooled multivariate hazard ratios (HRs), with corresponding 95% confidence intervals (CIs), were obtained using Cox regression models. Analyses were adjusted for gender, age and smoking status (Model 1), and additionally for the food groups fruit, vegetables and meat (Model 2). Dose–response relationships (Model 1) were examined using a nonparametric test for trend. In total, 2,871 cases and 522,364 non-cases were included in the analyses. The present study showed an overall increased BC risk for high dietary vitamin D intake (HR: 1.14, 95% CI: 1.03-1.26). A similarly increased BC risk with high vitamin D intake was observed among women and for the non-muscle-invasive BC subtype (HR: 1.41, 95% CI: 1.15-1.72 and HR: 1.13, 95% CI: 1.01-1.27, respectively). High calcium intake decreased the BC risk among women (HR: 0.81, 95% CI: 0.67-0.97). A combined inverse effect on BC risk was observed for low vitamin D intake and high calcium intake (HR: 0.67, 95% CI: 0.48-0.93), while a positive effect was observed for high vitamin D intake in combination with low, moderate and high phosphorus intake (HR: 1.31, 95% CI: 1.09-1.59; HR: 1.17, 95% CI: 1.01-1.36; HR: 1.16, 95% CI: 1.03-1.31, respectively).
Combining all nutrients showed a decreased BC risk for low vitamin D, high calcium and moderate phosphorus intake (HR: 0.37, 95% CI: 0.18-0.75), and an increased BC risk for moderate intake of all the nutrients (HR: 1.18, 95% CI: 1.02-1.38), for high vitamin D and low calcium and phosphorus intake (HR: 1.28, 95% CI: 1.01-1.62), and for moderate vitamin D and calcium and high phosphorus intake (HR: 1.27, 95% CI: 1.01-1.59). No significant dose–response trends were observed. The findings of this study show an increased BC risk for high dietary vitamin D intake and a decreased risk for high calcium intake. In addition, the study highlights the importance of examining the effect of a nutrient in combination with complementary nutrients for risk assessment. Future research should focus on nutrients in a wider context and on nutritional patterns.
Keywords: bladder cancer, nutritional oncology, pooled cohort analysis, vitamin D
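The pooled estimates above come from Cox regression; as a hedged illustration of how a hazard ratio and its 95% CI are recovered from a fitted log-hazard coefficient, the sketch below uses a coefficient and standard error back-derived from the reported headline estimate (HR 1.14, 95% CI 1.03-1.26), not the study's actual fitted values:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox log-hazard coefficient and its standard error
    into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper

# Illustrative values chosen to reproduce the headline estimate
# for high dietary vitamin D intake (assumed, not from the paper).
hr, lo, hi = hazard_ratio_ci(beta=0.1310, se=0.0514)
print(round(hr, 2), round(lo, 2), round(hi, 2))  # → 1.14 1.03 1.26
```

The exponentiation explains why the CI is asymmetric around the HR: the interval is symmetric only on the log scale.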
Procedia PDF Downloads 84
7062 Modeling and Optimization of Performance of Four Stroke Spark Ignition Injector Engine
Authors: A. A. Okafor, C. H. Achebe, J. L. Chukwuneke, C. G. Ozoegwu
Abstract:
The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with values close to the actual ones. This paper presents a comprehensive mathematical model of the performance parameters of a four stroke spark ignition engine. The essence of this research work is to develop a mathematical model for the analysis of engine performance parameters of a four stroke spark ignition engine before embarking on full scale construction. This will ensure that only optimal parameters enter the design and development of an engine, and it also allows checking and developing the design of the engine and its operation alternatives inexpensively and quickly, instead of using experimental methods which require costly research test beds. To achieve this, equations were derived which describe the performance parameters (sfc, thermal efficiency, mep and A/F). The equations were used to simulate and optimize the engine performance of the model for various engine speeds. The optimal values obtained for the developed bivariate mathematical models are: sfc of 0.2833 kg/kWh, efficiency of 28.77% and A/F of 20.75.
Keywords: bivariate models, engine performance, injector engine, optimization, performance parameters, simulation, spark ignition
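The reported sfc and efficiency are linked by a standard relation: one kWh of brake work is 3.6 MJ, so brake thermal efficiency is 3.6 divided by the fuel energy spent per kWh. A minimal sketch, assuming a gasoline lower heating value of 44 MJ/kg (an assumption, not stated in the abstract), reproduces a value close to the reported 28.77%:

```python
def thermal_efficiency(sfc_kg_per_kwh, lhv_mj_per_kg):
    """Brake thermal efficiency from specific fuel consumption.

    One kWh of brake work equals 3.6 MJ; the fuel energy spent to
    produce it is sfc * LHV (MJ), so efficiency = 3.6 / (sfc * LHV).
    """
    return 3.6 / (sfc_kg_per_kwh * lhv_mj_per_kg)

# Optimal sfc reported above (0.2833 kg/kWh) with an assumed
# gasoline LHV of 44 MJ/kg:
eff = thermal_efficiency(0.2833, 44.0)
print(round(eff * 100, 1))  # → 28.9
```

The small gap to 28.77% would be explained by a slightly different heating value in the authors' model.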
Procedia PDF Downloads 326
7061 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed Forward-Back Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapour pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of the rice crop.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
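The empirical CWSI used here normalizes the measured canopy-air temperature difference between the two baselines, and NSE is one of the comparison metrics named above. A hedged sketch with hypothetical baseline values (the actual baselines depend on crop height, AVPD, and wind speed as described):

```python
def cwsi(canopy_air_diff, lower_baseline, upper_baseline):
    """Empirical CWSI from the canopy-air temperature difference (Tc - Ta),
    normalized between the non-water-stressed (lower) and
    non-transpiring (upper) baselines."""
    return (canopy_air_diff - lower_baseline) / (upper_baseline - lower_baseline)

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency, used above to compare SOM and FF-BP-ANN."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical baselines (deg C): a measured Tc - Ta of 1.0 between a lower
# baseline of -2.0 and an upper baseline of 5.0 gives a moderate stress index.
print(round(cwsi(1.0, -2.0, 5.0), 3))  # → 0.429
```

NSE equals 1 for a perfect prediction and falls below 0 when the model is worse than the observed mean.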
Procedia PDF Downloads 117
7060 Giving Right-of-Way to Emergency Ambulances: Attitude and Behavior of Road Users in Developing Countries
Authors: Mahmoud T. Alwidyan, Ahmad Alrawashdeh, Alaa O. Oteir
Abstract:
Background: Emergency medical service (EMS) providers oftentimes use the lights and sirens (L&S) of their ambulances to warn road users, navigate through traffic, and expedite transport to save the lives of ill and injured patients. Despite the contribution of road users to the effectiveness of reducing transport time of EMS ambulances using L&S, there is a lack of empirical assessments exploring road users' attitudes and behavior in such situations. This study, therefore, aimed to assess the attitude and behavior of road users in response to EMS ambulances with warning L&S in use. Methods: This was a cross-sectional survey developed and distributed to adult road users in Northern Jordan. The questionnaire included 20 items addressing demographics, attitudes, and behavior toward emergency ambulances. We described the participants' responses and assessed the association between demographics and attitude statements using logistic regression. Results: A total of 1302 questionnaires were complete and appropriate for analysis. The mean age was 34.2 (SD ± 11.4) years, and the majority were males (72.6%). About half of the road users (47.9%) in our sample would perform an inappropriate action in response to EMS ambulances with L&S in use. The multivariate logistic regression model shows that being female (OR, 0.63; 95% CI = 0.48-0.81), more educated (OR, 0.68; 95% CI = 0.53-0.86), or a public transport driver (OR, 0.55; 95% CI = 0.34-0.90) is significantly associated with lower odds of an inappropriate response to EMS ambulances. Additionally, a significant proportion of road users may perform inappropriate and lawless driving practices such as crossing red traffic lights or following passing EMS ambulances, which would, in turn, increase the risk to ambulances and other road users. Conclusions: A large proportion of road users in Jordan may respond inappropriately to EMS ambulances, and many engage in risky driving behaviors due perhaps to a lack of procedural knowledge.
Policy-related interventions and educational programs are crucially needed to increase public awareness of the traffic law concerning EMS ambulances and to enhance appropriate driving behavior, which, in turn, improves the efficiency of ambulance services.
Keywords: EMS ambulances, lights and sirens, road users, attitude and behavior
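Odds ratios like those reported above can be illustrated from a 2x2 table; the counts below are hypothetical (not the study's data) and only show the mechanics, with the Wald CI on the log scale:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = group 1, inappropriate; b = group 1, appropriate;
    c = group 2, inappropriate; d = group 2, appropriate."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: an OR below 1 means group 1 has lower
# odds of an inappropriate response than group 2.
or_, lo, hi = odds_ratio_ci(a=10, b=20, c=30, d=40)
print(round(or_, 2))  # → 0.67
```

A multivariate logistic model, as used in the study, adjusts such crude ORs for the other covariates simultaneously.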
Procedia PDF Downloads 88
7059 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased over recent years according to the National Emergency Management Agency of South Korea. Even though most of the fire incidents with property damage have occurred in buildings, rehabilitation has not been properly done with consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a designed concrete strength of 21 MPa. Two different cover thicknesses were used, 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both fire damaged beams and rehabilitated beams were tested with a four point loading system to observe structural behaviors and the rehabilitation effect. To verify the experiment, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire damaged beams. In addition to the fire damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three dimensional continuum brick elements were used for both temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that maximum loads of the rehabilitated beams were 8-10% higher than those of the non-rehabilitated beams and even 1-6% higher than those of the non-fire damaged beam.
Stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire damaged beams. In addition, the structural behaviors predicted by the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. From this study, both experiments and analytical results demonstrated a good rehabilitation effect on fire damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to predict the structural behaviors of rehabilitated and fire damaged concrete beams accurately without the time- and cost-consuming experimental process.
Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 508
7058 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well-suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques are applied to the Urdu Customer Support data set and the IMDB Urdu movie review data set using pre-trained Urdu word embeddings that are suitable for sentiment analysis at the document level. The results of these techniques are evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models and achieved 83%, 79%, 83% and 94% accuracy on the small, medium and large sized IMDB Urdu movie review data sets and the Urdu Customer Support data set, respectively.
Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
Procedia PDF Downloads 72
7057 Distributed Manufacturing (DM): Smart Units and Collaborative Processes
Authors: Hermann Kuehnle
Abstract:
Developments in ICT are totally reshaping manufacturing, as machines, objects and equipment on the shop floor will be smart and online. Interactions with virtualizations and models of a manufacturing unit will appear exactly as interactions with the unit itself. These virtualizations may be driven by providers with novel on-demand ICT services that might jeopardize even well established business models. Context aware equipment, autonomous orders, scalable machine capacity and networkable manufacturing units will be the terminology to get familiar with in manufacturing and manufacturing management. Such newly appearing smart abilities, with their impact on network behavior, collaboration procedures and human resource development, will make distributed manufacturing a preferred model of production. Computing miniaturization and smart devices revolutionize manufacturing set-ups, as virtualizations and atomization of resources unwrap novel manufacturing principles. Processes and resources obey novel specific laws and have strategic impact on manufacturing, with major operational implications. Mechanisms from distributed manufacturing engaging interacting smart manufacturing units and decentralized planning and decision procedures already demonstrate important effects of this shift of focus towards collaboration and interoperability.
Keywords: autonomous unit, networkability, smart manufacturing unit, virtualization
Procedia PDF Downloads 526
7056 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
The geological environment where groundwater is collected represents the most important element that affects the behaviour of a groundwater aquifer. As groundwater is a worldwide vital resource, the parameters that affect this resource must be known accurately so that the conceptualized mathematical models are acceptable over the broadest possible ranges. Therefore, groundwater models have recently become an effective and efficient tool to investigate groundwater aquifer behaviours. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes have geological formations that force the modellers to include them within the conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model is built for Al-Najaf City to explore the impact of this interface. Calibration is done using the PEST 'Parameter ESTimation' approach, and the best Dibdibba groundwater model is obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by that interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively. The Euphrates River will also leak 7,359 m³/day of water into the groundwater aquifer. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6,334 m³/day.
In addition, the conceptualized models (with and without the interface) reveal different responses to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied by the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in the model conceptualization to make the intended and future predicted behaviours more reliable.
Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW
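The river leakage figures above follow the conductance-based leakage relation used in codes such as MODFLOW's river package: Q = C · (h_river − h_aquifer). A minimal sketch, with hypothetical conductance and head values (not the Dibdibba model's inputs):

```python
def river_leakage(conductance, river_stage, aquifer_head):
    """River-aquifer exchange flow (m3/day): positive when the river
    stage exceeds the aquifer head, i.e. the river loses water to
    the aquifer; Q = C * (h_river - h_aquifer)."""
    return conductance * (river_stage - aquifer_head)

# Hypothetical values: a riverbed conductance of 1000 m2/day and a
# river stage 0.5 m above the aquifer head give 500 m3/day of leakage.
q = river_leakage(1000.0, 20.5, 20.0)
print(q)  # → 500.0
```

This makes clear why neglecting the interface changes the leakage: a different simulated aquifer head under the river directly rescales Q.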
Procedia PDF Downloads 183
7055 Exploring the Relationships between Cyberbullying Perceptions and Facebook Attitudes of Turkish Students
Authors: Yavuz Erdoğan, Hidayet Çiftçi
Abstract:
Cyberbullying, a phenomenon among adolescents, is defined as actions that use information and communication technologies such as social media to support deliberate, repeated, and hostile behaviour by an individual or group. With the advancement of communication and information technology, cyberbullying has expanded its boundaries among students in schools. Thus, parents, psychologists, educators, and lawmakers must become aware of the potential risks of this phenomenon. In the light of these perspectives, this study aims to investigate the relationships between the cyberbullying perceptions and Facebook attitudes of Turkish students. A survey method was used, and the data were collected with the 'Cyberbullying Perception Scale', the 'Facebook Attitude Scale' and a 'Personal Information Form'. For this purpose, the study was conducted during the 2014-2015 academic year with a total of 748 students, 493 male (65.9%) and 255 female (34.1%), from randomly selected high schools. In the analysis of the data, Pearson correlation, multiple regression analysis, multivariate analysis of variance (MANOVA) and the Scheffe post hoc test were used. At the end of the study, the results displayed a negative correlation between Turkish students' Facebook attitudes and cyberbullying perception (r=-.210; p<0.05). In order to identify the predictors of students' cyberbullying perception, multiple regression analysis was used. As a result, significant relations were detected between cyberbullying perception and the independent variables (F=5.102; p<0.05). The independent variables together explain 11.0% of the total variance in cyberbullying scores. The variables that significantly predict the students' cyberbullying perception are Facebook attitudes (t=-5.875; p<0.05) and gender (t=3.035; p<0.05). In order to calculate the effects of the independent variables on students' Facebook attitudes and cyberbullying perception, MANOVA was conducted.
The results of the MANOVA indicate that Facebook attitudes and cyberbullying perception differed significantly according to students' gender, age, educational attainment of the mother, educational attainment of the father, income of the family and daily usage of the internet.
Keywords: facebook, cyberbullying, attitude, internet usage
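The bivariate r = -.210 reported above corresponds to only about 4.4% shared variance (r squared), while the regression's 11.0% includes the contribution of gender and the other predictors. A hedged sketch of the Pearson coefficient on toy data (not the study's scores):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Toy data: a perfectly inverse relationship gives r = -1.
print(pearson_r([1, 2, 3, 4], [4, 3, 2, 1]))  # → -1.0

# Shared variance implied by the reported r = -0.210, in percent:
print(round((-0.210) ** 2 * 100, 1))  # → 4.4
```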
Procedia PDF Downloads 402
7054 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language, but with how machines process those natural languages.
These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: gendered grammar, misogynistic language, natural language processing, neural networks
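The kind of corpus measurement described above (adjective frequencies around gendered words) can be sketched with a simple windowed co-occurrence count; the word lists and the toy sentence below are hypothetical illustrations, not the paper's data or method:

```python
from collections import Counter

def adjective_counts_by_gender(tokens, adjectives, window=3):
    """Count how often each adjective appears within +/-window tokens of a
    gendered word -- a simple proxy for the corpus biases described above."""
    gendered = {"she": "female", "her": "female", "woman": "female",
                "he": "male", "his": "male", "man": "male"}
    counts = {"female": Counter(), "male": Counter()}
    for i, tok in enumerate(tokens):
        gender = gendered.get(tok.lower())
        if gender is None:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for neighbor in tokens[lo:hi]:
            if neighbor.lower() in adjectives:
                counts[gender][neighbor.lower()] += 1
    return counts

# Toy example (hypothetical sentence, not actual training data):
toks = "she was hysterical and he was brilliant".split()
c = adjective_counts_by_gender(toks, {"hysterical", "brilliant"})
print(c["female"]["hysterical"], c["male"]["brilliant"])  # → 1 1
```

Real analyses would add lemmatization, part-of-speech filtering, and significance testing on much larger corpora.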
Procedia PDF Downloads 120
7053 Longitudinal Study of the Phenomenon of Acting White in Hungarian Elementary Schools Analysed by Fixed and Random Effects Models
Authors: Lilla Dorina Habsz, Marta Rado
Abstract:
Popularity is affected by a variety of factors in primary school, such as academic achievement and ethnicity. The main goal of our study was to analyse whether acting white exists in Hungarian elementary schools. In other words, we observed whether Roma students penalize those in-group members who attain high academic achievement. Furthermore, we examined how popularity is influenced by changes in academic achievement in inter-ethnic relations. The empirical basis of our research was the 'competition and negative networks' longitudinal dataset, which was collected by the MTA TK 'Lendület' RECENS research group. This research followed 11- and 12-year-old students over a two-year period. The survey was analysed using fixed and random effects models. Overall, we found a positive correlation between grades and popularity, but no evidence for the acting white effect. However, better grades were more positively evaluated within the majority group than within the minority group, which may further increase inequalities.
Keywords: academic achievement, elementary school, ethnicity, popularity
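The fixed-effects approach used above identifies the grade-popularity link from within-student changes over the two waves, netting out stable student-level differences. A minimal sketch of the within (demeaning) estimator on a hypothetical panel (not the RECENS data):

```python
from collections import defaultdict

def fixed_effects_slope(groups, x, y):
    """Within (fixed-effects) estimator: demean x and y inside each group,
    then regress demeaned y on demeaned x. This removes stable group-level
    differences (e.g. each student's baseline popularity)."""
    idx = defaultdict(list)
    for i, g in enumerate(groups):
        idx[g].append(i)
    xd, yd = [0.0] * len(x), [0.0] * len(y)
    for members in idx.values():
        mx = sum(x[i] for i in members) / len(members)
        my = sum(y[i] for i in members) / len(members)
        for i in members:
            xd[i], yd[i] = x[i] - mx, y[i] - my
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# Hypothetical panel: two students observed twice; despite very different
# popularity levels, within each student a one-unit rise in grades
# coincides with a two-unit rise in popularity.
slope = fixed_effects_slope(["A", "A", "B", "B"], [0, 1, 10, 11], [0, 2, 100, 102])
print(slope)  # → 2.0
```

A random-effects model would instead pool the between- and within-student variation under stronger assumptions.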
Procedia PDF Downloads 200
7052 Robots for City Life: Design Guidelines and Strategy Recommendations for Introducing Robots in Cities
Authors: Akshay Rege, Lara Gomaa, Maneesh Kumar Verma, Sem Carree
Abstract:
The aim of this paper is to articulate design strategies and recommendations for introducing robots into the city life of people, based on experiments conducted with robots and semi-autonomous systems in three cities in the Netherlands. This research was carried out by the Spot robotics team of Impact Lab, housed within YES!Delft, a start-up accelerator located in Delft, The Netherlands. The premise of this research is to inform the development of the 'region of the future' by the Municipality of Rotterdam-Den Haag (MRDH). The paper starts by reporting the desktop research carried out to find and develop multiple use cases for robots to support humans in various activities. Further, the paper reports the user research carried out by crowdsourcing responses collected in public spaces of the Rotterdam-Den Haag region and on the internet. Furthermore, based on the knowledge gathered in the initial research, practical experiments were carried out using robots and semi-autonomous systems in order to test and validate our initial research. These experiments were conducted in three cities in the Netherlands: Rotterdam, The Hague, and Delft. A custom sensor box, a drone, and Boston Dynamics' Spot robot were used to conduct these experiments. Out of thirty use cases, five were tested with experiments: skyscraper emergency evacuation, human transportation and security, bike lane delivery, mobility tracking, and robot drama. The learnings from these experiments provided us with insights into human-robot interaction and symbiosis in cities, which can be used to introduce robots in cities to support human activities, ultimately enabling the transition from a human-only city life towards a blended one where robots can play a role. Based on these understandings, we formulated design guidelines and strategy recommendations for incorporating robots in the Rotterdam-Den Haag region of the future.
Lastly, we discuss how our insights from the Rotterdam-Den Haag region can inspire and inform the incorporation of robots in different cities of the world.
Keywords: city life, design guidelines, human-robot interaction, robot use cases, robotic experiments, strategy recommendations, user research
Procedia PDF Downloads 97
7051 Gender Differences in Morphological Predictors of Running Ability: A Comprehensive Analysis of Male and Female Athletes in Cape Coast Metropolis, Ghana
Authors: Stephen Anim, Emmanuel O. Sarpong, Daniel Apaak
Abstract:
This study investigates the relationship between morphological predictors and running ability, emphasizing gender-specific variations among male and female athletes in Cape Coast Metropolis (CCM), Ghana. The dynamic interplay between an athlete's physique and their performance capabilities holds particular relevance in the realm of sports science, influencing training methodologies and talent identification processes. The research aims to contribute comprehensive insights into the morphological determinants of running proficiency, with a specific focus on the local athletic community in Cape Coast Metropolis. Utilizing a correlational research design, a thorough analysis of 22 morphological features, encompassing body weight, 6 body length measurements, 7 body girth measurements, knee diameter, and 7 skinfold measurements, against the 50 m dash, among male and female athletes, was conducted. The study involved 420 athletes, male (N=210) and female (N=210), aged 16-22, from 10 Senior High Schools (SHS) in the Cape Coast Metropolis, providing a representative sample of the local athletic community. The collected data were statistically analysed using means and standard deviations, and stepwise multiple regression, to determine how morphological variables contribute to and predict running proficiency outcomes. The investigation revealed that athletes from Senior High Schools (SHS) in Cape Coast Metropolis (CCM) exhibit well-developed physiques and sufficient fitness levels suitable for overall athletic performance, taking into account gender differences. Moreover, the findings suggested that approximately 77% of running ability could be attributed to morphological factors, leading to diverse predictive models for male and female athletes within SHS in CCM, Ghana.
Consequently, these formulated equations hold promise for predicting running ability among young athletes, particularly in the context of SHS environments.
Keywords: body fat, body girth, body length, morphological features, running ability, senior high school
Procedia PDF Downloads 67
7050 Sediment Transport Monitoring in the Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando
Abstract:
The construction of most coastal infrastructure developments around the world is usually planned considering wave height, current velocities and river discharges; however, little attention has been paid to surveying sediment transport during dredging or the modification of currents outside ports or marinas during and after construction. This study presents a complete survey during the construction of one of the largest ports of the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction in order to describe the hourly sediment transport and current modifications in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low wave energy system with a tidal range of up to 0.40 m. The results show a two-gyre current pattern within the bay. The north side of the bay has an anticyclonic gyre, while the southern part of the bay shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model and the weekly data obtained from the towed ADCP within the entire bay. The sediment transport trajectories were carefully tracked since the bay is surrounded by coral reef structures, which are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging and the rock input used to build the breakwater, sediments were locally added (< 2,500 m²) and local currents dispersed them in less than 4 h, while the river input located in the middle of the bay and the sewage treatment plant may add more than 10 times this amount during a rainy day or during the tourist season.
Finally, the coastline obtained seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port located in the northern part of the bay, owing to the two-subsystem division of the bay.
Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring
Procedia PDF Downloads 227
7049 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change per hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0, and the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved compared with the classic WMR model, although a difference between the actual and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; during the decay period, however, the model significantly overestimates the experimental results. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, yet the rate predicted by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
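The modification described above can be illustrated with a minimal sketch of the well-mixed room mass balance extended with a first-order particle deposition loss rate. The numerical values used below (emission rate, deposition rate) are illustrative assumptions, not the study's measured parameters:

```python
import math

def wmr_concentration(t, S, V, ach, beta, C0=0.0):
    """Well-mixed room concentration at time t (hours) for a constant source.

    S    : emission rate (particles/h); S = 0 gives the decay period
    V    : room volume (m^3), e.g. the 0.512 m^3 chamber
    ach  : air changes per hour (1/h)
    beta : particle deposition loss rate (1/h); beta = 0 recovers the classic WMR
    C0   : concentration at t = 0
    """
    k = ach + beta                # total first-order loss rate (1/h)
    c_ss = S / (V * k)            # steady-state concentration (emission period)
    return c_ss + (C0 - c_ss) * math.exp(-k * t)
```

With `beta = 0` the expression reduces to the classic WMR model; a positive deposition rate lowers both the steady-state concentration and the time constant of the decay period, which is exactly where the modified model departs from the classic one.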
Procedia PDF Downloads 103
7048 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price
Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin
Abstract:
Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to mining, data are consolidated so that the resulting process may be more efficient. This study investigates the effect of different data normalization techniques, namely min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed are the Grey Wolf Optimizer (GWO) and the Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot price of crude oil and gasoline. Results showed that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf Optimizer
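The three normalization techniques compared in the study can be sketched in a few lines. This is a generic illustration of the transforms themselves, not the authors' implementation:

```python
def min_max(xs, new_min=0.0, new_max=1.0):
    """Min-max normalization: linearly rescale values into [new_min, new_max]."""
    lo, hi = min(xs), max(xs)
    return [new_min + (x - lo) * (new_max - new_min) / (hi - lo) for x in xs]

def z_score(xs):
    """Z-score normalization: zero mean, unit (population) standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / std for x in xs]

def decimal_scaling(xs):
    """Decimal scaling: divide by 10^j, the smallest power making all |values| < 1."""
    j = len(str(int(max(abs(x) for x in xs))))
    return [x / 10 ** j for x in xs]
```

Each transform preserves the ordering of the series; they differ only in the range and distribution handed to the optimizer, which is why GWO and ABC can respond to them differently.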
Procedia PDF Downloads 476
7047 Diagnostics and Explanation of the Current Status of the 40-Year Railway Viaduct
Authors: Jakub Zembrzuski, Bartosz Sobczyk, Mikołaj Miśkiewicz
Abstract:
Besides designing new structures, engineers all over the world must face another problem: maintenance, repairs, and assessment of the technical condition of existing bridges. Solving the more complex issues requires familiarity with the theory of the finite element method and access to software that provides sufficient tools to create sometimes significantly advanced numerical models. The paper includes a brief assessment of the technical condition, a description of the in situ non-destructive testing carried out, and the FEM models created for global and local analysis. In situ testing was performed using strain gauges and displacement sensors. Numerical models were created using various software packages and numerical modeling techniques. Particularly noteworthy is the method of modeling the riveted joints of the crossbeam of the viaduct. It is a simplified method that uses only basic numerical tools such as beam and shell finite elements, constraints, and simplified boundary conditions (fixed support and symmetry). The results of the numerical analyses were presented and discussed. It is clearly explained why the structure did not fail, despite the complete failure of the deck plate weld. A further research problem that was solved was determining the cause of the rapid increase in the values on the stress diagram in the cross-section of the transverse member. The problems were solved using only the aforementioned simplified method of modeling riveted joints, which demonstrates that such problems can be solved without access to sophisticated software for advanced nonlinear analysis. Moreover, the obtained results are of great importance in the field of assessing the operation of bridge structures with an orthotropic plate.
Keywords: bridge, diagnostics, FEM simulations, failure, NDT, in situ testing
Procedia PDF Downloads 73
7046 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods
Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan
Abstract:
Introduction: Age estimation is a crucial step in establishing the identity of a person, both for the deceased and the living. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition, and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of the qualitative (method 1) and quantitative (method 2) methods of age estimation among North Indians and to compare their efficacy. Method: 250 single-rooted extracted teeth (18-75 yrs.) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. Each parameter was scored using Gustafson's 0-3 point scoring system (qualitative), and a total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under the microscope with the help of a measuring eyepiece. Age was estimated using linear regression analysis for Gustafson's method and multiple regression analysis for Kedici's method. Estimated age was compared with actual age on the basis of the absolute mean error. Results: In the pooled data, by Gustafson's method, a significant correlation (r = 0.8) was observed between the total score and actual age. The total score generated an absolute mean error of ±7.8 years. For Kedici's method, a correlation coefficient of r = 0.5 (p < 0.01) was observed between the eighteen micrometric parameters and known age. Using a multiple regression equation, age was estimated, and the absolute mean error was found to be ±12.18 years.
Conclusion: Gustafson's (qualitative) method was found to be the better predictor for age estimation among North Indians.
Keywords: forensic odontology, age estimation, North India, teeth
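Gustafson-type estimation, fitting age against the total regressive-change score by ordinary least squares, can be sketched as follows. The scores and ages below are made-up illustrative values, not the study's data:

```python
def fit_line(scores, ages):
    """Ordinary least squares fit of the model: age = a + b * total_score."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(ages) / n
    b = sum((x - mx) * (y - my) for x, y in zip(scores, ages)) / \
        sum((x - mx) ** 2 for x in scores)
    a = my - b * mx
    return a, b

def mean_absolute_error(a, b, scores, ages):
    """Absolute mean error of the fitted line, as used to compare the two methods."""
    return sum(abs((a + b * x) - y) for x, y in zip(scores, ages)) / len(ages)
```

The same error metric, computed for the multiple-regression (Kedici) model, is what allows the ±7.8-year and ±12.18-year figures to be compared directly.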
Procedia PDF Downloads 242
7045 Factors Affecting M-Government Deployment and Adoption
Authors: Saif Obaid Alkaabi, Nabil Ayad
Abstract:
Governments constantly seek to offer faster, more secure, efficient, and effective services for their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Advances in e-government services have therefore been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative data-technology structures, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. In practice, m-government models, techniques, and methods have become the improved version of e-government. M-government offers the potential for applications that work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation, because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through mobile portals by businesses, organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations.
The limitations include the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated security tasks (including the ability to ensure the privacy of information), and the management of the legal issues concerning mobile applications and the utilization of services.
Keywords: e-government, m-government, system dependability, system security, trust
Procedia PDF Downloads 381
7044 The Future of Insurance: P2P Innovation versus Traditional Business Model
Authors: Ivan Sosa Gomez
Abstract:
Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is also beginning to have a significant impact. P2P insurance is defined as an innovation enabling policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. As a result, although P2P platforms do not change the fundamental basis of insurance, they do enable potentially more efficient business models in terms of ensuring the coverage of risk. It is therefore relevant to determine whether P2P innovation can have substantial effects on the future of the insurance sector. For this purpose, it is considered necessary to examine P2P innovation from a business perspective, as well as to build a comparison between a traditional model and a P2P model from an actuarial perspective. Objectives: The objectives are (1) to represent P2P innovation in the business model compared to the traditional insurance model and (2) to establish a comparison between a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is defined as action research, in terms of understanding and solving the problems of a collectivity linked to an environment, applying theory and best practices. The study is carried out through the participatory variant, which involves the collaboration of the participants, who are considered experts in this design. Prolonged immersion in the field serves as the main instrument for data collection. Finally, an actuarial model is developed for the calculation of premiums that allows projections of future scenarios and the generation of conclusions between the two models.
Main Contributions: From an actuarial and business perspective, we aim to contribute by developing a comparison of the two models in the coverage of risk in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.
Keywords: InsurTech, innovation, business model, P2P, insurance
Procedia PDF Downloads 92
7043 Healthcare Providers’ Perception Towards Utilization of Health Information Applications and Its Associated Factors in Healthcare Delivery in Health Facilities in Cape Coast Metropolis, Ghana
Authors: Richard Okyere Boadu, Godwin Adzakpah, Nathan Kumasenu Mensah, Kwame Adu Okyere Boadu, Jonathan Kissi, Christiana Dziyaba, Rosemary Bermaa Abrefa
Abstract:
Information and communication technology (ICT) has significantly advanced global healthcare, with electronic health (e-Health) applications improving health records and delivery. These innovations, including electronic health records, strengthen healthcare systems. This study investigates healthcare professionals' perceptions of health information applications and their associated factors in health facilities in the Cape Coast Metropolis of Ghana. Methods: We used a descriptive cross-sectional study design to collect data from 632 healthcare professionals (HCPs) in three purposively selected health facilities in the Cape Coast municipality of Ghana in July 2022. The Shapiro-Wilk test was used to check the normality of the dependent variables. Descriptive statistics were used to report means with corresponding standard deviations for continuous variables; proportions were reported for categorical variables. Bivariate regression analysis was conducted to determine the factors influencing the Benefits of Information Technology (BoIT), Barriers to Information Technology Use (BITU), and Motives of Information Technology Use (MoITU) in healthcare delivery. Stata SE version 15 was used for the analysis. A p-value of less than 0.05 was considered statistically significant. Results: Healthcare professionals generally perceived moderate benefits (mean score (M) = 5.67) from information technology (IT) in healthcare. However, they slightly agreed that barriers such as insufficient computers (M = 5.11), frequent system downtime (M = 5.09), low system performance (M = 5.04), and inadequate staff training (M = 4.88) hindered IT utilization. Respondents slightly agreed that training (M = 5.56), technical support (M = 5.46), and changes in work procedures (M = 5.10) motivated their IT use.
Bivariate regression analysis revealed significant influences of education, working experience, healthcare profession, and IT training on attitudes towards IT utilization in healthcare delivery (BoIT, BITU, and MoITU). Additionally, the age of healthcare providers, education, and working experience significantly influenced BITU. Ultimately, age, education, working experience, healthcare profession, and IT training significantly influenced MoITU in healthcare delivery. Conclusions: Healthcare professionals acknowledge moderate benefits of IT in healthcare but encounter barriers such as inadequate resources and training. Motives for IT use include staff training and support. Bivariate regression analysis shows that education, working experience, profession, and IT training significantly influence attitudes toward IT adoption. Targeted interventions and policies can enhance IT utilization in the Cape Coast Metropolis, Ghana.
Keywords: health information application, utilization of information application, information technology use, healthcare
Procedia PDF Downloads 65
7042 Machine Learning Approach in Predicting Cracking Performance of Fiber Reinforced Asphalt Concrete Materials
Authors: Behzad Behnia, Noah LaRussa-Trott
Abstract:
In recent years, fibers have been successfully used as an additive to reinforce asphalt concrete materials and to enhance the sustainability and resiliency of transportation infrastructure. Roads covered with fiber-reinforced asphalt concrete (FRAC) require less frequent maintenance and tend to have a longer lifespan. The present work investigates the application of Sasobit-coated aramid fibers in asphalt pavements and employs machine learning to develop prediction models for evaluating the cracking performance of FRAC materials. For the experimental part of the study, the effects of several important parameters, such as fiber content, fiber length, and testing temperature, on the fracture characteristics of FRAC mixtures were thoroughly investigated. Two mechanical performance tests, i.e., the disk-shaped compact tension [DC(T)] and indirect tensile [ID(T)] strength tests, as well as the non-destructive acoustic emission test, were utilized to experimentally measure the cracking behavior of the FRAC material at the macro and micro levels, respectively. The experimental results were used to train a supervised machine learning approach in order to establish prediction models for the fracture performance of FRAC mixtures in the field. Experimental results demonstrated that adding fibers improved the overall fracture performance of asphalt concrete materials by increasing their fracture energy and tensile strength and lowering their 'embrittlement temperature'. FRAC mixtures containing long fibers exhibited better cracking performance than regular-size fiber mixtures. The developed prediction models of this study could be easily employed by pavement engineers in the assessment of FRAC pavements.
Keywords: fiber reinforced asphalt concrete, machine learning, cracking performance tests, prediction model
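The prediction step can be illustrated with a deliberately simple supervised learner. The abstract does not specify which algorithm was used, so a k-nearest-neighbour regressor over (fiber content, fiber length, temperature) is used here purely as a stand-in, with invented training rows:

```python
def knn_predict(x, train_X, train_y, k=3):
    """Predict a fracture metric by averaging the k nearest training mixtures.

    x and each row of train_X: (fiber content %, fiber length mm, test temp C).
    In practice the features should be normalized before computing distances,
    since they live on very different scales.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, row)) ** 0.5, y)
        for row, y in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k
```

Any regressor trained on the DC(T)/ID(T) results would slot into the same role: mixture parameters in, predicted fracture energy or strength out.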
Procedia PDF Downloads 141
7041 River Catchment’s Demography and the Dynamics of Access to Clean Water in the Rural South Africa
Authors: Yiseyon Sunday Hosu, Motebang Dominic Vincent Nakin, Elphina N. Cishe
Abstract:
Universal access to clean and safe drinking water and basic sanitation is one of the targets of the sixth Sustainable Development Goal (SDG 6). This paper explores evidence-based indicators of the Water Rights Act (2013) among households in rural communities in the Mthatha River catchment of the OR Tambo District Municipality of South Africa. Daily access to the minimum of 25 litres/person and the factors influencing clean water access were investigated in the catchment. A total of 420 households were surveyed in the upper, peri-urban, lower, and coastal regions of the Mthatha River catchment. Descriptive and logistic regression analyses were conducted on the data collected from the households to elicit vital information on domestic water security among rural community dwellers. The results show that approximately 68 percent of the households surveyed have access to the required minimum of 25 litres/person/day, with 66.3 percent in the upper region, 76 percent in the peri-urban region, 1.1 percent in the lower region, and 2.3 percent in the coastal region. Only 30 percent of the surveyed households had access to piped water, either in the house or from public taps. The logistic regression showed that access to clean water was influenced by the lack of water infrastructure, proximity to urban regions, daily flow of pipe-borne water, household size, and distance to public taps. This paper recommends viable integrated rural community-based water infrastructure provision strategies between NGOs and local authorities, and the promotion of point-of-use (POU) technologies, to enhance access to clean water.
Keywords: domestic water, household technology, water security, rural community
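The logistic-regression step can be sketched with a single predictor, here a hypothetical distance-to-tap variable. The data are invented for illustration, and the fit uses plain gradient descent rather than the statistical package the authors would have used:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-feature logistic regression: P(access=1 | x) = sigmoid(w*x + b).

    xs : predictor values (e.g. distance to the nearest public tap, km)
    ys : binary outcomes (1 = household meets the 25 L/person/day threshold)
    Fitted by batch gradient descent on the log-loss.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b
```

A negative fitted slope `w` corresponds to the reported finding: the further a household is from a public tap, the lower its odds of meeting the daily access threshold.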
Procedia PDF Downloads 353
7040 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through process intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example the purity of the product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
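The enumerate-then-screen step maps naturally onto binary vectors. The sketch below uses a toy phenomena list and the single screening rule quoted above (phase change requires energy transfer); the actual phenomena lists and rule sets of the methodology are assumptions here:

```python
from itertools import product

# Toy phenomena list; the real methodology derives this from the process functions.
PHENOMENA = ["mixing", "reaction", "vapour-liquid equilibrium",
             "phase change", "energy transfer"]

def feasible(combo):
    """Screening rule from the text: phase change requires energy transfer."""
    if "phase change" in combo and "energy transfer" not in combo:
        return False
    return True

def generate_options():
    """Enumerate all on/off combinations (binaries) and keep the feasible ones."""
    options = []
    for bits in product((0, 1), repeat=len(PHENOMENA)):
        combo = {p for p, on in zip(PHENOMENA, bits) if on}
        if combo and feasible(combo):
            options.append(bits)
    return options
```

Each surviving binary tuple is exactly the (1, 0) representation the text describes being passed to the model generation algorithm.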
Procedia PDF Downloads 232
7039 Ethanol in Carbon Monoxide Intoxication: Focus on Delayed Neuropsychological Sequelae
Authors: Hyuk-Hoon Kim, Young Gi Min
Abstract:
Background: In carbon monoxide (CO) intoxication, the pathophysiology of delayed neurological sequelae (DNS) is very complex and remains poorly understood. Predicting whether patients whose acute symptoms have resolved have escaped or will experience DNS is a very important clinical issue. Brain magnetic resonance (MR) imaging has been used to assess the severity of brain damage as an objective method of predicting prognosis. Co-ingestion of a second poison occurs in almost one-half of patients with intentional CO poisoning; among patients with co-ingestions, 66% ingested ethanol. We assessed the effect of ethanol on the prevalence of neurologic sequelae in acute CO intoxication, using abnormal lesions on brain MR as the outcome. Method: This study was conducted retrospectively by collecting data for patients who visited an emergency medical center during a period of 5 years. The enrollment criteria were a diagnosis of acute CO poisoning, measurement of the serum ethanol level, and a brain MR examination during the admission period. Official readings by a radiologist were used to decide whether an abnormal lesion existed. The enrolled patients were divided into two groups: patients with and without an abnormal lesion on brain MR. A standardized extraction from the medical records was performed; the Mann-Whitney U test and logistic regression analysis were applied. Result: A total of 112 patients were enrolled, and 68 patients presented an abnormal brain lesion on MR. The abnormal-lesion group had a lower serum ethanol level (mean, 20.14 vs 46.71 mg/dL) (p < 0.001). In addition, univariate logistic regression analysis showed that the serum ethanol level (OR, 0.99; 95% CI, 0.98-1.00) was independently associated with the development of an abnormal lesion on brain MR.
Conclusion: Ethanol could have a neuroprotective effect in acute CO intoxication through a sedative effect in a stressful situation and a mitigating effect on the neuro-inflammatory reaction.
Keywords: carbon monoxide, delayed neuropsychological sequelae, ethanol, intoxication, magnetic resonance
Procedia PDF Downloads 252
7038 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification
Authors: Zhaoxin Luo, Michael Zhu
Abstract:
In natural languages, there are always complex semantic hierarchies, and obtaining feature representations based on these hierarchies is key to the success of a model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that uses only a single-layer latent indicator cannot capture the true hierarchical structure of a language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training the model hierarchically, we solve the computational problem of stacking RNN layers and make it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short-text classification problem but also achieves state-of-the-art results on the Chinese long-text classification problem.
Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese
Procedia PDF Downloads 68
7037 A Practical Survey on Zero-Shot Prompt Design for In-Context Learning
Authors: Yinheng Li
Abstract:
The remarkable advancements in large language models (LLMs) have brought about significant improvements in natural language processing tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges of evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various natural language processing (NLP) tasks.
Keywords: in-context learning, prompt engineering, zero-shot learning, large language models
Procedia PDF Downloads 83
7036 The Potential of 48V HEV in Real Driving
Authors: Mark Schudeleit, Christian Sieg, Ferit Küçükay
Abstract:
This paper describes how to dimension the electric components of a 48V hybrid system for real customer use. Furthermore, it provides information about the savings in energy and CO2 emissions achieved by a customer-tailored 48V hybrid. Based on measured customer profiles, the electric units, such as the electric motor and the energy storage, are dimensioned. The CO2 reduction potential in real customer use is then determined relative to conventional vehicles. Finally, investigations are carried out to specify the topology design and preliminary considerations for hybridizing a conventional vehicle with a 48V hybrid system. The emission model follows an empirical approach that also takes into account the effects of engine dynamics on emissions. We analyzed transient engine emissions during representative customer driving profiles and created emission meta-models. The investigation showed a significant difference in emissions when simulating realistic customer driving profiles using the verified meta-models, compared with the static approaches commonly used for vehicle simulation.
Keywords: customer use, dimensioning, hybrid electric vehicles, vehicle simulation, 48V hybrid system
Procedia PDF Downloads 508
7035 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
The Yi are an ethnic group mainly living in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of these ancient characters is still inadequate. Thus, deep learning technology was applied to their recognition. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of the ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the features of the highest probability were recognized. Tests show that the method has achieved higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
Keywords: recognition, CNN, Yi character, divergence
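The final recognition step described, turning the re-evaluated features into a probability distribution and picking the most probable character, amounts to a softmax over the network's output scores. A minimal sketch follows; the logits and character set are invented placeholders for the network's actual outputs:

```python
import math

def softmax(logits):
    """Convert raw output scores into a probability distribution."""
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def recognize(logits, charset):
    """Return the character with the highest softmax probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return charset[best], probs[best]
```

In the full method, the logits would come from the CNN's Alpha-Beta-divergence-regularized output layer; this sketch only shows the probability-and-argmax stage.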
Procedia PDF Downloads 165
7034 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
One of the most harmful geological hazards is rockfall. Rockfalls cause both economic losses, through damage to buildings and infrastructure, and personal injuries. Therefore, to estimate the risk to the exposed elements, it is necessary to understand the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and on the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from a drone or a terrestrial laser scanner (TLS), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. To share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is needed. Open-source software has been used throughout the process: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information, using the INSPIRE data specifications for natural risks and adding specific, detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer. A preliminary phase was the creation of several views in PostGIS to present the information at different visualization scales and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided.
At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. Queries at this level offer greater detail about the movement. Finally, the third level shows all the elements: deposit, source, and blocks, at their real size where possible and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
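A WMS published this way is consumed through standard GetMap requests. The sketch below builds such a request URL; the endpoint, layer name, and SRS are hypothetical placeholders, not the actual rockdb.upc.edu service parameters:

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(800, 600), srs="EPSG:25831"):
    """Build a WMS 1.1.1 GetMap request URL.

    base : the service endpoint (hypothetical in the example below)
    bbox : (minx, miny, maxx, maxy) in the coordinate system given by srs
    srs  : spatial reference; EPSG:25831 here is an illustrative assumption
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)
```

An OpenLayers client issues requests of exactly this shape for each visible tile, which is how the three PostGIS views (point, convex hull, full detail) can be served from one WMS by switching layers with the zoom level.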
Procedia PDF Downloads 160