Search results for: atomic models
5850 Investigation of Unconventional Fuels in Co-Axial Engines
Authors: Arya Pirooz
Abstract:
The effects of different fuels (DME, RME B100, and SME B100) on barrel engines were studied as a general, one-dimensional investigation for characterization of these types of engines. A base computational model was created as a reference point for comparison with the different cases. The models were computed using the commercial computational fluid dynamics program Diesel-RK. The base model was created using basic dimensions of the PAMAR-3 engine with inline unit injectors. Four fuel cases were considered. Optimized models were also considered for the diesel and DME cases with respect to injection duration, fuel, injection timing, exhaust and intake port opening, compression ratio (CR), and angular offset. These factors were optimized for highest BMEP, combined PM and NOx emissions, and highest SFC. Results included mechanical efficiency (eta_m), efficiency and power, emission characteristics, and combustion characteristics. DME proved to have the highest performing characteristics in relation to diesel and RME fuels for this type of barrel engine. Keywords: DME, RME, Diesel-RK, characterization, inline unit injector
Procedia PDF Downloads 475
5849 Modeling and Optimization of Performance of Four Stroke Spark Ignition Injector Engine
Authors: A. A. Okafor, C. H. Achebe, J. L. Chukwuneke, C. G. Ozoegwu
Abstract:
The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with results close to actual values. This paper presents a comprehensive mathematical model of the performance parameters of a four-stroke spark ignition engine. The essence of this research work is to develop a mathematical model for the analysis of engine performance parameters of a four-stroke spark ignition engine before embarking on full-scale construction. This ensures that only optimal parameters enter the design and development of the engine, and it allows the design and its operating alternatives to be checked and developed inexpensively and quickly, instead of using experimental methods which require costly research test beds. To achieve this, equations were derived which describe the performance parameters (SFC, thermal efficiency, MEP and A/F). The equations were used to simulate and optimize the engine performance of the model for various engine speeds. The optimal values obtained for the developed bivariate mathematical models are: SFC of 0.2833 kg/kWh, efficiency of 28.77% and A/F of 20.75. Keywords: bivariate models, engine performance, injector engine, optimization, performance parameters, simulation, spark ignition
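As an illustrative note on the kind of relations such equations capture, the sketch below evaluates textbook spark-ignition performance parameters (SFC, brake thermal efficiency, MEP, A/F) over a speed sweep in Python; the functions and all numerical inputs are hypothetical placeholders, not the authors' bivariate models.

```python
# Illustrative sketch (not the authors' exact bivariate models): textbook
# spark-ignition performance relations evaluated over a speed sweep.
# All numerical inputs are hypothetical placeholders.
import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power (kW) from brake torque (N*m) and engine speed (rpm)."""
    return 2 * math.pi * speed_rpm / 60 * torque_nm / 1000

def sfc_kg_per_kwh(fuel_flow_kg_per_h, brake_power):
    """Brake specific fuel consumption (kg/kWh)."""
    return fuel_flow_kg_per_h / brake_power

def thermal_efficiency(brake_power, fuel_flow_kg_per_h, lhv_mj_per_kg=44.0):
    """Brake thermal efficiency; gasoline LHV assumed ~44 MJ/kg."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600 * lhv_mj_per_kg * 1000
    return brake_power / fuel_power_kw

def mep_kpa(work_per_cycle_kj, displacement_m3):
    """Mean effective pressure = work per cycle / displacement volume."""
    return work_per_cycle_kj / displacement_m3

def air_fuel_ratio(air_flow_kg_per_h, fuel_flow_kg_per_h):
    return air_flow_kg_per_h / fuel_flow_kg_per_h

if __name__ == "__main__":
    for rpm in (2000, 3000, 4000):                 # hypothetical speed sweep
        p_b = brake_power_kw(torque_nm=110.0, speed_rpm=rpm)
        w_cycle = p_b / (rpm / 120)                # kJ/cycle for a four-stroke
        print(rpm,
              round(sfc_kg_per_kwh(8.5, p_b), 3),
              round(thermal_efficiency(p_b, 8.5), 3),
              round(mep_kpa(w_cycle, 0.0016), 0),
              round(air_fuel_ratio(150.0, 8.5), 2))
```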
Procedia PDF Downloads 323
5848 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased over recent years according to the National Emergency Management Agency of South Korea. Even though most fire incidents with property damage have occurred in buildings, rehabilitation has not been properly carried out with consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two different cover thicknesses were used: 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both fire damaged beams and rehabilitated beams were tested with a four point loading system to observe structural behaviors and the rehabilitation effect. To verify the experiment, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain geometries of the fire damaged beams. In addition to the fire damaged beam models, the rehabilitated part was added with material properties of polymeric mortar. Three dimensional continuum brick elements were used for both temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire damaged beam. Stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire damaged beams. In addition, predicted structural behaviors from the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. From this study, both experiments and analytical results demonstrated a good rehabilitation effect on the fire damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to predict structural behaviors of rehabilitated and fire damaged concrete beams accurately without the time- and cost-consuming experimental process. Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 507
5847 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well-suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representation from Transformer (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support data set and the IMDB Urdu movie review data set using pre-trained Urdu word embeddings that are suitable for sentiment analysis at the document level. The results of these techniques were evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models and achieved 83%, 79%, 83% and 94% accuracy on the small, medium and large sized IMDB Urdu movie review data sets and the Urdu Customer Support data set, respectively. Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
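The hybrid architecture described above can be outlined with a short Keras sketch; the hyperparameters (vocabulary size, filter widths, LSTM units) and the plain embedding layer are hypothetical placeholders rather than the pre-trained Urdu embeddings and settings actually used, so this is a rough outline of a BiLSTM followed by a single multi-filter convolutional layer, not the authors' exact BiLSTM-SLMFCNN.

```python
# Sketch of a BiLSTM + single-layer multi-filter CNN text classifier in Keras.
# Hyperparameters and the embedding initialisation are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bilstm_slmfcnn(vocab_size=50_000, emb_dim=300, max_len=400,
                         filter_sizes=(2, 3, 4), n_filters=100, n_classes=2):
    inp = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim)(inp)   # pre-trained vectors would be loaded here
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # single convolutional layer with several filter widths applied in parallel
    pooled = [layers.GlobalMaxPooling1D()(layers.Conv1D(n_filters, k, activation="relu")(x))
              for k in filter_sizes]
    x = layers.Concatenate()(pooled)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm_slmfcnn()
model.summary()
```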
Procedia PDF Downloads 70
5846 Effects of Copper and Cobalt Co-Doping on Structural, Optical and Electrical Properties of Tio2 Thin Films Prepared by Sol Gel Method
Authors: Rabah Bensaha, Badreeddine Toubal
Abstract:
Un-doped TiO2, Co single-doped TiO2 and (Cu-Co) co-doped TiO2 thin films have been grown on silicon substrates by the sol-gel dip coating technique. We mainly investigated the effects of both the dopants and the annealing temperature on the structural, optical and electrical properties of the TiO2 films using X-ray diffraction (XRD), Raman and FTIR spectroscopy, atomic force microscopy (AFM), scanning electron microscopy (SEM), and UV–Vis spectroscopy. The chemical compositions of the Co-doped and (Cu-Co) co-doped TiO2 films were confirmed by the XRD, Raman and FTIR studies. The average grain sizes of the CoTiO3-TiO2 nanocomposites increased with annealing temperature. AFM and SEM reveal the various nanostructures of the CoTiO3-TiO2 nanocomposite thin films. The films exhibit a high optical reflectance with a large band gap. The highest electrical conductivity was obtained for the (Cu-Co) co-doped TiO2 films. The polyhedral surface morphology might possibly improve the surface contact between particles and thus contribute to better electron mobility as well as conductivity. The obtained results suggest that the prepared TiO2 films can be used for optoelectronic applications. Keywords: sol-gel, TiO2 thin films, CoTiO3-TiO2 nanocomposites films, Electrical conductivity
Procedia PDF Downloads 441
5845 Distributed Manufacturing (DM)- Smart Units and Collaborative Processes
Authors: Hermann Kuehnle
Abstract:
Developments in ICT totally reshape manufacturing as machines, objects and equipment on the shop floors will be smart and online. Interactions with virtualizations and models of a manufacturing unit will appear exactly as interactions with the unit itself. These virtualizations may be driven by providers with novel ICT services on demand that might jeopardize even well established business models. Context aware equipment, autonomous orders, scalable machine capacity or networkable manufacturing unit will be the terminology to get familiar with in manufacturing and manufacturing management. Such newly appearing smart abilities with impact on network behavior, collaboration procedures and human resource development will make distributed manufacturing a preferred model to produce. Computing miniaturization and smart devices revolutionize manufacturing set ups, as virtualizations and atomization of resources unwrap novel manufacturing principles. Processes and resources obey novel specific laws and have strategic impact on manufacturing and major operational implications. Mechanisms from distributed manufacturing engaging interacting smart manufacturing units and decentralized planning and decision procedures already demonstrate important effects from this shift of focus towards collaboration and interoperability.Keywords: autonomous unit, networkability, smart manufacturing unit, virtualization
Procedia PDF Downloads 525
5844 Design Systems and the Need for a Usability Method: Assessing the Fitness of Components and Interaction Patterns in Design Systems Using Atmosphere Methodology
Authors: Patrik Johansson, Selina Mardh
Abstract:
The present study proposes a usability test method, Atmosphere, to assess the fitness of components and interaction patterns of design systems. The method covers the user’s perception of the components of the system, the efficiency of the logic of the interaction patterns, perceived ease of use as well as the user’s understanding of the intended outcome of interactions. These aspects are assessed by combining measures of first impression, visual affordance and expectancy. The method was applied to a design system developed for the design of an electronic health record system. The study was conducted involving 15 healthcare personnel. It could be concluded that the Atmosphere method provides tangible data that enable human-computer interaction practitioners to analyze and categorize components and patterns based on perceived usability, success rate of identifying interactive components and success rate of understanding components and interaction patterns intended outcome.Keywords: atomic design, atmosphere methodology, design system, expectancy testing, first impression testing, usability testing, visual affordance testing
Procedia PDF Downloads 178
5843 Levels of Selected Heavy Metals in Varieties of Vegetable oils Consumed in Kingdom of Saudi Arabia and Health Risk Assessment of Local Population
Authors: Muhammad Waqar Ashraf
Abstract:
Selected heavy metals, namely Cu, Zn, Fe, Mn, Cd, Pb, and As, in seven popular varieties of edible vegetable oils collected from Saudi Arabia, were determined by graphite furnace atomic absorption spectrometry (GF-AAS) using microwave digestion. The accuracy of procedure was confirmed by certified reference materials (NIST 1577b). The concentrations for copper, zinc, iron, manganese, lead and arsenic were observed in the range of 0.035 - 0.286, 0.955 - 3.10, 17.3 - 57.8, 0.178 - 0.586, 0.011 - 0.017 and 0.011 - 0.018 µg/g, respectively. Cadmium was found to be in the range of 2.36 - 6.34 ng/g. The results are compared internationally and with standards laid down by world health agencies. A risk assessment study has been carried out to assess exposure to these metals via consumption of vegetable oils. A comparison has been made with safety intake levels for these heavy metals recommended by Institute of Medicine of the National Academies (IOM), US Environmental Protection Agency (US EPA) and Joint FAO/WHO Expert Committee on Food Additives (JECFA). The results indicated that the dietary intakes of the selected heavy metals from daily consumption of 25 g of edible vegetable oils for a 70 kg individual should pose no significant health risk to local population.Keywords: vegetable oils, heavy metals, contamination, health risk assessment
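The exposure calculation implied by the abstract (25 g of oil per day for a 70 kg individual) can be sketched as follows; the concentrations used are illustrative values taken from within the measured ranges quoted above, and the tolerable intake limits from IOM, US EPA and JECFA would still have to be looked up separately.

```python
# Estimated daily intake (EDI) sketch based on the abstract's exposure scenario:
# 25 g of oil per day for a 70 kg adult. Concentrations (ug/g) are illustrative
# values within the measured ranges quoted in the abstract.
daily_oil_g = 25.0
body_weight_kg = 70.0

conc_ug_per_g = {          # approximate mid-range measured concentrations
    "Cu": 0.16, "Zn": 2.0, "Fe": 37.0, "Mn": 0.38,
    "Pb": 0.014, "As": 0.014, "Cd": 0.0044,   # Cd converted from ng/g
}

for metal, c in conc_ug_per_g.items():
    edi = c * daily_oil_g / body_weight_kg    # ug per kg body weight per day
    print(f"{metal}: EDI = {edi:.4f} ug/kg bw/day")
# Each EDI would then be compared with the tolerable intake recommended
# by IOM, US EPA or JECFA for that element.
```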
Procedia PDF Downloads 449
5842 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
The geological environment where groundwater is collected represents the most important element that affects the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest ranges. Therefore, groundwater models have recently become an effective and efficient tool to investigate groundwater aquifer behaviours. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes have geological formations that force modellers to include those formations within the conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe that the interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model is built for Al-Najaf City to explore the impact of this interface. The calibration process is carried out using the PEST 'Parameter ESTimation' approach, and the best Dibdibba groundwater model is obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by that interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively. The Euphrates River also leaks water into the groundwater aquifer at 7359 m³/day. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6334 m³/day. In addition, the conceptualized models (with and without the interface) reveal different responses to the change in the recharge rates applied to the aquifer in the uncertainty analysis test. The aquifer of Dibdibba in Al-Najaf City shows a slight deficit in the amount of water supplied by the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the present and future predicted behaviours are more reliable. Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW
Procedia PDF Downloads 182
5841 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are at best, large corpora of human literature and at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that in the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data, and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language having to deal with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given its semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies. The progression of society comes hand in hand with not only its language, but how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.Keywords: gendered grammar, misogynistic language, natural language processing, neural networks
Procedia PDF Downloads 118
5840 Longitudinal Study of the Phenomenon of Acting White in Hungarian Elementary Schools Analysed by Fixed and Random Effects Models
Authors: Lilla Dorina Habsz, Marta Rado
Abstract:
Popularity is affected by a variety of factors in primary school, such as academic achievement and ethnicity. The main goal of our study was to analyse whether acting white exists in Hungarian elementary schools. In other words, we observed whether Roma students penalize those in-group members who obtain high academic achievement. Furthermore, we show how popularity is influenced by changes in academic achievement in inter-ethnic relations. The empirical basis of our research was the 'competition and negative networks' longitudinal dataset, which was collected by the MTA TK 'Lendület' RECENS research group. This research followed 11- and 12-year-old students for a two-year period. The survey was analysed using fixed and random effects models. Overall, we found a positive correlation between grades and popularity, but no evidence for the acting white effect. However, better grades were more positively evaluated within the majority group than within the minority group, which may further increase inequalities. Keywords: academic achievement, elementary school, ethnicity, popularity
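For readers unfamiliar with the modelling step, the sketch below shows one common way to fit random- and fixed-effects panel models in Python with statsmodels; the data frame, variable names, formula and within-transformation are hypothetical placeholders, not the RECENS 'competition and negative networks' dataset or the authors' exact specification.

```python
# Sketch of fixed- and random-effects estimation for a popularity ~ grade
# relationship on a long-format panel (student x wave). All column names
# and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")   # columns: student_id, wave, popularity, grade, roma

# Random effects: random intercept per student via a linear mixed model
re_model = smf.mixedlm("popularity ~ grade * roma + wave",
                       data=df, groups=df["student_id"]).fit()
print(re_model.summary())

# Fixed effects via the within transformation: demean by student
demeaned = df[["popularity", "grade"]] - \
    df.groupby("student_id")[["popularity", "grade"]].transform("mean")
fe_model = smf.ols("popularity ~ grade - 1", data=demeaned).fit()
print(fe_model.summary())
```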
Procedia PDF Downloads 200
5839 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness to predict inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results conducted in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change per hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, as one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual value and the predicted value. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and uncertainty related to measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the deposition rate given by the mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant and will affect the airborne concentration in occupational settings, and it should be considered in the airborne exposure prediction model. The role of other removal mechanisms should be investigated. Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
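The modified balance described above can be sketched as a single ordinary differential equation, dC/dt = G/V - (Q/V + beta)·C, where beta lumps the deposition losses; the Python example below solves it for an emission and a decay period with placeholder values, not the chamber parameters of the study.

```python
# Sketch of a well-mixed room (WMR) mass balance extended with particle
# deposition: dC/dt = G/V - (Q/V + beta) * C, where beta lumps gravitational
# settling and Brownian/turbulent deposition. Numbers are placeholders.
import numpy as np
from scipy.integrate import odeint

V = 0.512            # chamber volume, m^3
ach = 3.0            # air changes per hour
Q = ach * V          # ventilation flow, m^3/h
G = 1.0e6            # particle emission rate, particles/h (hypothetical)
beta = 0.5           # total deposition loss rate, 1/h (hypothetical)

def dCdt(C, t, emitting):
    gen = G / V if emitting else 0.0
    return gen - (Q / V + beta) * C

t_emit = np.linspace(0, 2, 200)                        # emission period, h
C_emit = odeint(dCdt, 0.0, t_emit, args=(True,)).ravel()
t_decay = np.linspace(0, 2, 200)                       # decay period, h
C_decay = odeint(dCdt, C_emit[-1], t_decay, args=(False,)).ravel()

print("steady-state estimate:", G / (Q + beta * V))    # particles/m^3
```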
Procedia PDF Downloads 101
5838 The Comparison of Chromium Ions Release Stainless Steel 18-8 between Artificial Saliva and Black Tea Leaves Extracts
Authors: Nety Trisnawaty, Mirna Febriani
Abstract:
Stainless steel wires are widely used in the field of dentistry, especially for orthodontic and prosthodontic treatment. The oral cavity is an ideal environment for corrosion, which can be caused by saliva. Prevention of corrosion on stainless steel wires can be done by using an organic or non-organic corrosion inhibitor. One of the organic inhibitors that can be used to prevent corrosion is black tea leaves extract. The aim is to compare the chromium ion release of stainless steel 18-8 between artificial saliva and black tea leaves extracts. In this research, we used artificial saliva, black tea leaves extracts, stainless steel wire, and an Atomic Absorption Spectrophotometric testing machine. The samples were soaked for 1, 3, 7 and 14 days in the artificial saliva and black tea leaves extracts. The results showed a difference in chromium ion release between samples soaked in artificial saliva and in black tea leaves extracts on days 1, 3, 7 and 14. Statistically, an independent t-test with p < 0.05 showed a significant difference. The longer the immersion period, the more chromium ions were released. The conclusion of this study shows that black tea leaves extracts can inhibit the corrosion rate of stainless steel wires. Keywords: chromium ion, stainless steel, artificial saliva, black tea leaves extracts
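The statistical comparison mentioned above can be illustrated with a short SciPy sketch; the measurement arrays are hypothetical placeholders, not the study's AAS readings.

```python
# Sketch of the independent-samples t-test used to compare chromium ion release
# between artificial saliva and black tea leaves extract. The values below are
# hypothetical placeholders, not the study's AAS measurements.
from scipy import stats

cr_saliva = [0.031, 0.035, 0.029, 0.033]   # ppm after a given immersion period
cr_tea    = [0.012, 0.015, 0.011, 0.013]

t_stat, p_value = stats.ttest_ind(cr_saliva, cr_tea, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in chromium release is statistically significant.")
```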
Procedia PDF Downloads 278
5837 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution
Authors: A. Amar
Abstract:
A new model, namely the crystal model, has been developed to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model has been adapted from solid-state physics, which uses the analogy between the nucleon distribution and the distribution of atoms in a crystal. The model provides an analytical expression to calculate the radius, while the density distribution of light nuclei is obtained from the analogy with the crystal lattice. The distribution of nucleons over the crystal has been discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces. The number of protons was taken to control the repulsive force, while the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. A test has been done on the data obtained for the radius and density distribution using double folding for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction. Good agreement has been obtained for both the radius and the density distribution of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome the discrepancy. Keywords: nuclear physics, nuclear lattice, study nucleus as crystal, light nuclei up to ⁸Be
Procedia PDF Downloads 174
5836 A Study on Leaching of Toxic Elements of High Strength Concrete Containing Waste Cathode Ray Tube Glass as Coarse Aggregate
Authors: Nurul Noraziemah Mohd Pauzi, Muhammad Fauzi Mohd Zain
Abstract:
The rapid advance in the electronics industry has led to an increased amount of waste cathode ray tube (CRT) devices. The management of CRT waste upon disposal has become a major issue of environmental concern, as it contains toxic elements (e.g. lead, barium, zinc) which carry a risk of leaching if not managed appropriately. Past studies have reported on the possible use of CRT glass as part of the aggregate in concrete production. However, incorporating waste CRT glass may present an environmental risk via the leachability of toxic elements. Accordingly, preventive measures for reducing the risk were proposed. The current work presents the experimental results regarding potential leaching of toxic elements from four types of concrete mixes, each comprising waste CRT glass as coarse aggregate with different shapes and properties. Concentrations of the detected elements were measured in the leachates using atomic absorption spectrometry (AAS). Results indicate that the concentrations of the detected elements were below the applicable risk levels, despite the higher content of toxic elements in CRT glass. Therefore, the use of waste CRT glass as coarse aggregate in hardened concrete does not pose any risk of heavy metal leaching to the environment. Keywords: recycled CRT glass, coarse aggregate, physical properties, leaching, toxic elements
Procedia PDF Downloads 356
5835 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price
Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin
Abstract:
Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to that, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of different data normalization techniques, namely Min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The recent swarm intelligence algorithms employed include the Grey Wolf Optimizer (GWO) and the Artificial Bee Colony (ABC). Forecasting models are later developed to predict the daily spot prices of crude oil and gasoline. Results showed that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with Min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. Such a result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms. Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer
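The three normalization techniques compared in the study can be written as one-line transformations; the sketch below applies them to a hypothetical price series.

```python
# The three normalization techniques compared in the study, as simple NumPy
# transformations of a 1-D price series. The prices are hypothetical values.
import numpy as np

def min_max(x):                      # rescale to [0, 1]
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):                      # zero mean, unit variance
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):              # divide by 10^j so that |x| <= 1
    x = np.asarray(x, dtype=float)
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / 10 ** j

prices = [68.2, 71.5, 74.9, 70.3, 66.8]   # hypothetical crude oil spot prices
for f in (min_max, z_score, decimal_scaling):
    print(f.__name__, np.round(f(prices), 3))
```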
Procedia PDF Downloads 475
5834 Diagnostics and Explanation of the Current Status of the 40-Year Railway Viaduct
Authors: Jakub Zembrzuski, Bartosz Sobczyk, Mikołaj MIśkiewicz
Abstract:
Besides designing new structures, engineers all over the world must face another problem – maintenance, repairs, and assessment of the technical condition of existing bridges. To solve more complex issues, it is necessary to be familiar with the theory of the finite element method and to have access to software that provides sufficient tools to create sometimes highly advanced numerical models. The paper includes a brief assessment of the technical condition, a description of the in situ non-destructive testing carried out, and the FEM models created for global and local analysis. In situ testing was performed using strain gauges and displacement sensors. Numerical models were created using various software and numerical modeling techniques. Particularly noteworthy is the method of modeling the riveted joints of the crossbeam of the viaduct. It is a simplified method that consists of the use of only basic numerical tools such as beam and shell finite elements, constraints, and simplified boundary conditions (fixed support and symmetry). The results of the numerical analyses were presented and discussed. It is clearly explained why the structure did not fail, despite the fact that the weld of the deck plate completely failed. A further research problem that was solved was determining the cause of the rapid increase in values on the stress diagram in the cross-section of the crossbeam. The problems were solved using only the aforementioned simplified method of modeling riveted joints, which demonstrates that it is possible to solve such problems without access to sophisticated software that enables advanced nonlinear analysis. Moreover, the obtained results are of great importance in the field of assessing the operation of bridge structures with an orthotropic plate. Keywords: bridge, diagnostics, FEM simulations, failure, NDT, in situ testing
Procedia PDF Downloads 71
5833 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters), and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was an open database of the statistical service Eurostat. The choice of the source is due to the fact that the databases contain complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering. The hierarchical agglomerative k-medoids algorithm was used. The sampled objects were used as the centers of the clusters obtained, since determining the centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The obtained predicted values of the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital with medical personnel. The research displays the strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential, which makes it possible to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage. Keywords: data analysis, demand modeling, healthcare, medical facilities
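As an illustration of the clustering step, the sketch below groups synthetic country time series by a correlation-based distance and scores candidate partitions with the silhouette coefficient; agglomerative clustering is used here as a stand-in for the k-medoids variant described in the abstract, and the data are random placeholders, not Eurostat series.

```python
# Sketch of grouping countries by the similarity of their indicator time series
# and scoring the partition with the silhouette coefficient. Agglomerative
# clustering on a correlation-based distance stands in for the k-medoids step
# described in the paper; the data are synthetic placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
series = rng.normal(size=(28, 10)).cumsum(axis=1)   # 28 countries, 10 years

corr = np.corrcoef(series)
dist = 1.0 - corr                                   # dissimilarity matrix
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist, checks=False)

scores = {}
for k in range(2, 7):
    labels = fcluster(linkage(condensed, method="average"), k, criterion="maxclust")
    scores[k] = silhouette_score(dist, labels, metric="precomputed")
print("silhouette by k:", scores, "-> best k:", max(scores, key=scores.get))
```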
Procedia PDF Downloads 144
5832 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables most influencing hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than simple linear models, with the highest coefficient of determination (R²) of 94% and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable or not in a commercial hatchery. Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
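The classification benchmark can be sketched with scikit-learn as below; the CSV file, its column names and the cross-validation setup are placeholders for the breeder/egg dataset rather than the authors' exact protocol.

```python
# Sketch of the binary hatchability classification benchmark with the five
# algorithms named in the abstract. The CSV file and its column names are
# placeholders for the breeder/egg dataset.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hatchability.csv")
features = ["egg_weight", "moisture_loss", "breeder_age", "fertilised_eggs",
            "shell_width", "shell_length", "shell_thickness"]
X, y = df[features], df["hatchable"]     # y: 1 if hatchability > 90%, else 0

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="linear"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```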
Procedia PDF Downloads 86
5831 Factors Affecting M-Government Deployment and Adoption
Authors: Saif Obaid Alkaabi, Nabil Ayad
Abstract:
Governments constantly seek to offer faster, more secure, efficient and effective services for their citizens. Recent changes and developments to communication services and technologies, mainly due the Internet, have led to immense improvements in the way governments of advanced countries carry out their interior operations Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of advances depends on the utilization of the most innovative structures of data techniques, mainly in web dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. Practically, the m-government models, techniques and methods have become the improved version of e-government. M-government offers the potential for applications which will work better, providing citizens with services utilizing mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation due to the fact that a large percentage of their population is young and can adapt to new technology and to the fact that mobile computing devices are more affordable. The use of models of mobile transactions encourages effective participation through the use of mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. The limitations include: the implementation of wireless networks and relative communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning the protection of security (including the ability to offer privacy for information), and the management of the legal issues concerning mobile applications and the utilization of services.Keywords: e-government, m-government, system dependability, system security, trust
Procedia PDF Downloads 380
5830 The Future of Insurance: P2P Innovation versus Traditional Business Model
Authors: Ivan Sosa Gomez
Abstract:
Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is also beginning to have a significant impact. P2P insurance is defined as innovation, enabling policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. As a result, although P2P platforms do not change the fundamental basis of insurance, they do enable potentially more efficient business models to be established in terms of ensuring the coverage of risk. It is therefore relevant to determine whether p2p innovation can have substantial effects on the future of the insurance sector. For this purpose, it is considered necessary to develop P2P innovation from a business perspective, as well as to build a comparison between a traditional model and a P2P model from an actuarial perspective. Objectives: The objectives are (1) to represent P2P innovation in the business model compared to the traditional insurance model and (2) to establish a comparison between a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is defined as action research in terms of understanding and solving the problems of a collectivity linked to an environment, applying theory and best practices according to the approach. For this purpose, the study is carried out through the participatory variant, which involves the collaboration of the participants, given that in this design, participants are considered experts. For this purpose, prolonged immersion in the field is carried out as the main instrument for data collection. Finally, an actuarial model is developed relating to the calculation of premiums that allows for the establishment of projections of future scenarios and the generation of conclusions between the two models. Main Contributions: From an actuarial and business perspective, we aim to contribute by developing a comparison of the two models in the coverage of risk in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.Keywords: Insurtech, innovation, business model, P2P, insurance
Procedia PDF Downloads 92
5829 Machine Learning Approach in Predicting Cracking Performance of Fiber Reinforced Asphalt Concrete Materials
Authors: Behzad Behnia, Noah LaRussa-Trott
Abstract:
In recent years, fibers have been successfully used as an additive to reinforce asphalt concrete materials and to enhance the sustainability and resiliency of transportation infrastructure. Roads covered with fiber-reinforced asphalt concrete (FRAC) require less frequent maintenance and tend to have a longer lifespan. The present work investigates the application of sasobit-coated aramid fibers in asphalt pavements and employs machine learning to develop prediction models to evaluate the cracking performance of FRAC materials. For the experimental part of the study, the effects of several important parameters such as fiber content, fiber length, and testing temperature on fracture characteristics of FRAC mixtures were thoroughly investigated. Two mechanical performance tests, i.e., the disk-shaped compact tension [DC(T)] and indirect tensile [ID(T)] strength tests, as well as the non-destructive acoustic emission test, were utilized to experimentally measure the cracking behavior of the FRAC material in both macro and micro level, respectively. The experimental results were used to train the supervised machine learning approach in order to establish prediction models for fracture performance of the FRAC mixtures in the field. Experimental results demonstrated that adding fibers improved the overall fracture performance of asphalt concrete materials by increasing their fracture energy, tensile strength and lowering their 'embrittlement temperature'. FRAC mixtures containing long-size fibers exhibited better cracking performance than regular-size fiber mixtures. The developed prediction models of this study could be easily employed by pavement engineers in the assessment of the FRAC pavements.Keywords: fiber reinforced asphalt concrete, machine learning, cracking performance tests, prediction model
Procedia PDF Downloads 139
5828 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products/processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at a unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or can enhance the effectiveness of the process, are added to the list. For instance, the catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless. For example, phase change phenomena need the co-presence of the energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to get the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature. Keywords: Phenomena, Process intensification, Process models, Process options
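The enumerate-and-screen step can be illustrated with a short Python sketch that generates phenomena combinations, discards infeasible ones with simple rules (e.g., phase change requires energy transfer), and encodes the survivors as binary vectors; the phenomena names and screening rules here are illustrative, not the paper's knowledge base.

```python
# Sketch of the enumerate-and-screen step: generate phenomena combinations,
# discard infeasible ones with simple rules, and encode the survivors as
# binary vectors for the model-generation stage. Phenomena names and the
# screening rules are illustrative placeholders.
from itertools import combinations

phenomena = ["reaction", "vap_liq_equilibrium", "liq_liq_equilibrium",
             "phase_change", "energy_transfer", "mass_transfer"]

def feasible(combo):
    s = set(combo)
    if "phase_change" in s and "energy_transfer" not in s:
        return False                         # phase change needs energy transfer
    if {"vap_liq_equilibrium", "liq_liq_equilibrium"} <= s:
        return False                         # illustrative mutual-exclusion rule
    return True

options = []
for r in range(1, len(phenomena) + 1):
    for combo in combinations(phenomena, r):
        if feasible(combo):
            options.append([1 if p in combo else 0 for p in phenomena])

print(f"{len(options)} feasible phenomena combinations")
print("first option as binaries:", options[0])
```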
Procedia PDF Downloads 230
5827 Coagulation-Flocculation of Palm Oil Mill Effluent from Pertubuhan Peladang Negeri Johor, Malaysia
Authors: A. H. Jagaba, Musa Babayo, Ab Aziz Abdul Latiff, Sule Abubakar, I. M. Lawal, Isa Zubairu, M. A. Nasara
Abstract:
Wastewater containing heavy metals is of extreme importance globally because of its potential threat to both the aquatic ecosystem and the soil environment. Heavy metals are hazardous even at low concentrations and can thereby cause various forms of disease. One method which has been tested and found to be effective for heavy metal removal is coagulation-flocculation. For the coagulation process of POME obtained from Pertubuhan Peladang Negeri Johor (PPNJ), an oil palm mill company located in the Kahang area of Kluang, Johor Darul Takzim, Malaysia, different coagulants were used to absorb and then separate the metals from the wastewater. The determination of heavy metal concentrations in POME was carried out using inductively coupled plasma (ICP) and an Atomic Absorption Spectrometer (AAS). Results of the study showed that the alum coagulant was successful in effectively reducing Cu, Cd, and Mn from 0.840 mg/l, 0.00509 mg/l and 8.191 mg/l to as low as 0.107 mg/l, 0.000270 mg/l and 0.612 mg/l, respectively. All were obtained at a dose of 1000 mg/l. A 1000 mg/l dose of ferric chloride reduced the Pb concentration from 0.0248 mg/l to 0.00151 mg/l. Chitosan was best at reducing Fe and Zn from 62.91 mg/l and 3.616 mg/l to 6.003 mg/l and 0.595 mg/l, respectively, all at a dose of 400 mg/l. Keywords: palm oil mill effluent, coagulation, heavy metals, Pertubuhan Peladang Negeri Johor, Malaysia
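The removal efficiencies implied by the initial and residual concentrations quoted above can be computed directly as (C0 - Cf)/C0 × 100; the sketch below uses the abstract's own numbers.

```python
# Removal efficiencies implied by the initial and residual concentrations
# quoted in the abstract (mg/l), computed as (C0 - Cf) / C0 * 100.
results = {
    # metal: (initial, final, coagulant, dose mg/l)
    "Cu": (0.840, 0.107, "alum", 1000),
    "Cd": (0.00509, 0.000270, "alum", 1000),
    "Mn": (8.191, 0.612, "alum", 1000),
    "Pb": (0.0248, 0.00151, "ferric chloride", 1000),
    "Fe": (62.91, 6.003, "chitosan", 400),
    "Zn": (3.616, 0.595, "chitosan", 400),
}
for metal, (c0, cf, coagulant, dose) in results.items():
    removal = (c0 - cf) / c0 * 100
    print(f"{metal}: {removal:.1f}% removal with {coagulant} at {dose} mg/l")
```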
Procedia PDF Downloads 223
5826 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification
Authors: Zhaoxin Luo, Michael Zhu
Abstract:
In natural languages, there are always complex semantic hierarchies. Obtaining the feature representation based on these complex semantic hierarchies becomes the key to the success of the model. Several RNN models have recently been proposed to use latent indicators to obtain the hierarchical structure of documents. However, the model that only uses a single-layer latent indicator cannot achieve the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. After using EM and training it hierarchically, our model solves the computational problem of stacking RNN layers and makes it possible to stack arbitrarily many RNN layers. Our deep hierarchical model not only achieves comparable results to large pre-trained models on the Chinese short text classification problem but also achieves state of art results on the Chinese long text classification problem.Keywords: nature language processing, recurrent neural network, hierarchical structure, document classification, Chinese
Procedia PDF Downloads 65
5825 Effect of Concentration Level and Moisture Content on the Detection and Quantification of Nickel in Clay Agricultural Soil in Lebanon
Authors: Layan Moussa, Darine Salam, Samir Mustapha
Abstract:
Heavy metal contamination in agricultural soils in Lebanon poses serious environmental and health problems. Intensive efforts are being made to improve existing methods for quantifying heavy metals in contaminated environments, since conventional detection techniques have been shown to be time-consuming, tedious, and costly. The application of hyperspectral remote sensing in this field is possible and promising. However, the factors impacting the efficiency of hyperspectral imaging in detecting and quantifying heavy metals in agricultural soils have not been thoroughly studied. This study proposes to assess the use of hyperspectral imaging for the detection of Ni in agricultural clay soil collected from the Bekaa Valley, a major agricultural area in Lebanon, under different contamination levels and soil moisture contents. Soil samples were contaminated with Ni, with concentrations ranging from 150 mg/kg to 4000 mg/kg. In addition, soil with background contamination was subjected to increased moisture levels varying from 5 to 75%. Hyperspectral imaging was used to detect and quantify Ni contamination in the soil at the different contamination levels and moisture contents. IBM SPSS statistical software was used to develop models that predict the concentration of Ni and the moisture content in agricultural soil. The models were constructed using linear regression algorithms. The spectral curves obtained showed an inverse correlation of both Ni concentration and moisture content with reflectance. The developed models resulted in high predicted R² values of 0.763 for Ni concentration and 0.854 for moisture content. The predictions indicated that the presence of Ni was well expressed near 2200 nm and that of moisture near 1900 nm. The results from this study would allow us to define the potential of using the hyperspectral imaging (HSI) technique as a reliable and cost-effective alternative for heavy metal pollution detection in contaminated soils and for soil moisture prediction. Keywords: heavy metals, hyperspectral imaging, moisture content, soil contamination
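The regression step can be sketched with scikit-learn as below (the abstract's SPSS models are linear regressions of concentration on reflectance); the file name, band columns and train/test split are hypothetical placeholders, with bands chosen near the 2200 nm and 1900 nm features highlighted above.

```python
# Sketch of the linear-regression step: predict Ni concentration and moisture
# content from reflectance at selected bands (the abstract highlights ~2200 nm
# for Ni and ~1900 nm for moisture). File name and column labels are placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("soil_spectra.csv")        # reflectance columns named by wavelength
bands_ni = ["R2100", "R2200", "R2300"]      # hypothetical band selection
X_tr, X_te, y_tr, y_te = train_test_split(df[bands_ni], df["ni_mg_per_kg"],
                                          test_size=0.3, random_state=0)
ni_model = LinearRegression().fit(X_tr, y_tr)
print("Ni prediction R^2:", round(r2_score(y_te, ni_model.predict(X_te)), 3))

bands_h2o = ["R1850", "R1900", "R1950"]
X_tr, X_te, y_tr, y_te = train_test_split(df[bands_h2o], df["moisture_pct"],
                                          test_size=0.3, random_state=0)
h2o_model = LinearRegression().fit(X_tr, y_tr)
print("Moisture prediction R^2:", round(r2_score(y_te, h2o_model.predict(X_te)), 3))
```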
Procedia PDF Downloads 99
5824 Physics of Black Holes. A Closed Cycle of Transformation of Matter in the Universe
Authors: Igor V. Kuzminov
Abstract:
The proposed article is a development of the topics of gravity, the inverse temperature dependence of gravity, the action of the inverse temperature dependence of gravity, and the second law of thermodynamics, dark matter, the identity of gravity, inertial forces, and centrifugal forces. All interaction schemes are built on the basis of Newton's laws of classical mechanics and Rutherford's planetary model of the structure of the atom. The basis of all constructions is the gyroscopic effect of rotation of all particles of the atomic structure. In this case, interatomic and intermolecular bonds are accepted as the static part of the gyroscope, and the rotation of an electron in an atom is accepted as the dynamic part. The structure of the planet Earth is accepted as a model of the structure of the Black Hole. Namely, gravitational and thermodynamic phenomena in the structure of the planet Earth are accepted as a model. Based on this model, assumptions are made about the processes inside the Black Hole. Moreover, a version is put forward, a scheme of a closed cycle of transformation of matter in the Universe.Keywords: black hole, gravity, inverse temperature dependence of gravitational forces, second law of thermodynamics, gyroscopic effect, dark matter
Procedia PDF Downloads 22
5823 A Practical Survey on Zero-Shot Prompt Design for In-Context Learning
Authors: Yinheng Li
Abstract:
The remarkable advancements in large language models (LLMs) have brought about significant improvements in natural language processing tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single ”best” prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various Natural Language Processing (NLP) tasks.Keywords: in-context learning, prompt engineering, zero-shot learning, large language models
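For readers new to the terminology, the sketch below contrasts a zero-shot discrete prompt with a few-shot one for a simple classification task; the templates are illustrative and no particular LLM API is assumed, since the strings would simply be passed to whichever model is being evaluated.

```python
# Sketch contrasting a zero-shot prompt with a few-shot prompt for a sentiment
# classification task. The templates are illustrative; no particular LLM API
# is assumed, and the resulting strings would be sent to the model under test.
def zero_shot_prompt(text: str) -> str:
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as Positive or Negative.\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

demo = [("Great battery life.", "Positive"), ("Screen cracked in a week.", "Negative")]
print(zero_shot_prompt("The camera is superb."))
print(few_shot_prompt("The camera is superb.", demo))
```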
Procedia PDF Downloads 78
5822 The Potential of 48V HEV in Real Driving
Authors: Mark Schudeleit, Christian Sieg, Ferit Küçükay
Abstract:
This paper describes how to dimension the electric components of a 48V hybrid system considering real customer use. Furthermore, it provides information about savings in energy and CO2 emissions by a customer-tailored 48V hybrid. Based on measured customer profiles, the electric units such as the electric motor and the energy storage are dimensioned. Furthermore, the CO2 reduction potential in real customer use is determined compared to conventional vehicles. Finally, investigations are carried out to specify the topology design and preliminary considerations in order to hybridize a conventional vehicle with a 48V hybrid system. The emission model results from an empiric approach also taking into account the effects of engine dynamics on emissions. We analyzed transient engine emissions during representative customer driving profiles and created emission meta models. The investigation showed a significant difference in emissions when simulating realistic customer driving profiles using the created verified meta models compared to static approaches which are commonly used for vehicle simulation.Keywords: customer use, dimensioning, hybrid electric vehicles, vehicle simulation, 48V hybrid system
Procedia PDF Downloads 505
5821 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems, after development of thousands of years. Ancient Yi is one of the six ancient languages in the world, which keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and spreading convenient. Due to historical and regional limitations, research on recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of the four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode output neurons of the five models. Two fully connected layers fulfilled the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and characters with features of the highest probability were recognized. Tests conducted show that the method has achieved higher precision compared with the traditional CNN model for handwriting recognition of the ancient Yi.Keywords: recognition, CNN, Yi character, divergence
Procedia PDF Downloads 161