Search results for: incomplete data extrapolation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24456

24366 Evaluation of Corrosion Property of Aluminium-Zirconium Dioxide (AlZrO2) Nanocomposites

Authors: M. Ramachandra, G. Dilip Maruthi, R. Rashmi

Abstract:

This paper aims to study the corrosion property of an aluminum matrix nanocomposite of an aluminum alloy (Al-6061) reinforced with zirconium dioxide (ZrO2) particles. The zirconium dioxide particles are synthesized by the solution combustion method. The nanocomposite materials are prepared by the mechanical stir casting method, varying the percentage of n-ZrO2 (2.5%, 5% and 7.5% by weight). The corrosion behavior of the base metal (Al-6061) and the Al/ZrO2 nanocomposite in seawater (3.5% NaCl solution) is measured using the potential control method. The corrosion rate is evaluated by the Tafel extrapolation technique. The corrosion potential increases with increasing wt.% of n-ZrO2 in the nanocomposite, which indicates a decrease in the corrosion rate. It is found that, on addition of n-ZrO2 particles to the aluminum matrix, the corrosion rate decreases compared to that of the base metal.
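
To make the Tafel step concrete, the sketch below illustrates how a corrosion current density and a corrosion rate can be obtained from potentiodynamic polarization data by Tafel extrapolation. It is a minimal illustration, not the authors' procedure: the function name, the fitting windows, and the equivalent weight and density values are assumptions chosen only for demonstration (the rate conversion follows the widely used ASTM G102 form).

```python
import numpy as np

def tafel_corrosion_rate(E, i, ecorr, branch_window=(0.05, 0.25),
                         equiv_weight=9.0, density=2.70):
    """Corrosion current density and rate (mm/yr) by Tafel extrapolation.

    E: potentials (V), i: current densities (A/cm^2), ecorr: corrosion potential (V).
    A line is fitted to log10|i| vs E on each branch inside branch_window
    (overpotential range, in V) and extrapolated back to Ecorr.
    Parameter values here are illustrative, not taken from the paper.
    """
    E, i = np.asarray(E, dtype=float), np.asarray(i, dtype=float)
    log_i = np.log10(np.abs(i) + 1e-12)
    log_icorr = []
    for sign in (+1, -1):                                   # anodic, then cathodic branch
        eta = sign * (E - ecorr)
        mask = (eta >= branch_window[0]) & (eta <= branch_window[1])
        slope, intercept = np.polyfit(E[mask], log_i[mask], 1)
        log_icorr.append(slope * ecorr + intercept)         # fitted log10|i| at Ecorr
    icorr = 10 ** np.mean(log_icorr)                        # A/cm^2
    # ASTM G102 conversion: rate (mm/yr) from icorr in uA/cm^2, EW, density (g/cm^3)
    rate = 3.27e-3 * (icorr * 1e6) * equiv_weight / density
    return icorr, rate
```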

Keywords: Al6061 alloy, corrosion, solution, stir casting, combustion, potentiostat, zirconium dioxide

Procedia PDF Downloads 364
24365 Improved Safety Science: Utilizing a Design Hierarchy

Authors: Ulrica Pettersson

Abstract:

Collection of information on incidents is regularly done through pre-printed incident report forms. These tend to be incomplete and frequently lack essential information. One consequence is that reports with inadequate information, which do not fulfil analysts’ requirements, are transferred into the analysis process. To improve an incident reporting form, theory from design science, witness psychology, and interview and questionnaire research has been used. Previously, three experiments were conducted to evaluate the form and showed significantly improved results. The form has proved to capture knowledge, regardless of the incidents’ character or context. The aim of this paper is to describe how design science, and more specifically a design hierarchy, can be used to construct a collection form for improvements in safety science.

Keywords: data collection, design science, incident reports, safety science

Procedia PDF Downloads 191
24364 Effectiveness of Weather Index Insurance for Smallholders in Ethiopia

Authors: Federica Di Marcantonio, Antoine Leblois, Wolfgang Göbel, Hervè Kerdiles

Abstract:

Weather-related shocks can threaten the ability of farmers to maintain their agricultural output and food security levels. Informal coping mechanisms (i.e., migration or community risk sharing) have always played a significant role in mitigating the negative effects of weather-related shocks in Ethiopia, but they have been found to be an incomplete strategy, particularly as a response to covariate shocks. As an alternative to traditional risk-pooling products, an innovative form of insurance known as index-based insurance has received a lot of attention from researchers and international organizations, leading to an increased number of pilot initiatives in many countries. Despite the potential benefit of the product in protecting the livelihoods of farmers and pastoralists against climate shocks, to date there has been an unexpectedly low uptake. Using information from current pilot projects on index-based insurance in Ethiopia, this paper discusses the determinants of uptake that have so far undermined the scaling-up of the products, focusing in particular on weather data availability, price affordability and willingness to pay. We found that, aside from data constraint issues, high price elasticity and low willingness to pay represent impediments to the development of the market. These results bring us to rethink the role of index insurance as a product for enhancing smallholders’ response to covariate shocks, and particularly for improving their food security.

Keywords: index-based insurance, willingness to pay, satellite information, Ethiopia

Procedia PDF Downloads 379
24363 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation

Authors: Matthias Leitner, Gernot Pottlacher

Abstract:

Experimental determination of critical point data like critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
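
Since longitudinal expansion of the wire is inhibited, density follows directly from the measured diameter ratio. The snippet below is a minimal sketch of that final conversion step; the function name and the room-temperature density value are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Room-temperature density of niobium (g/cm^3); value assumed here for illustration.
RHO_0 = 8.57

def liquid_density(d_ratio, rho_0=RHO_0):
    """Density from the measured diameter ratio d(T)/d0 of the pulse-heated wire.

    Longitudinal expansion is inhibited at high heating rates, so the relative
    volume expansion reduces to the square of the radial expansion:
        V(T)/V0 = (d(T)/d0)^2  ->  rho(T) = rho_0 / (d(T)/d0)^2
    """
    d_ratio = np.asarray(d_ratio, dtype=float)
    return rho_0 / d_ratio**2

# Example: diameters (FWHM of shadow-image profiles) relative to the cold wire
print(liquid_density([1.00, 1.05, 1.12]))
```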

Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion

Procedia PDF Downloads 197
24362 Analysis of Extreme Case of Urban Heat Island Effect and Correlation with Global Warming

Authors: Kartikey Gupta

Abstract:

Global warming and environmental degradation are at their peak today, with the years after 2000 A.D. accounting for 15 of the hottest years on record in terms of average temperatures. In India, much of the standard temperature-measuring equipment is located in ‘developed’ urban areas, hence showing us an incomplete picture of the climate across many rural areas, which comprise most of the landmass. This study showcases data collected by the author over three years at Vatsalya’s Children’s Village, on the outskirts of Jaipur, Rajasthan, India, in the midst of semi-arid topography, where consistently large temperature differences of up to 15.8 degrees Celsius from local Jaipur weather only 30 kilometers away are stunning and alarming at the same time, encouraging analysis of where the natural climatic pattern is heading due to rapid, unrestricted urbanization. The record-breaking data presented in this project reinforce the need to discuss causes and recovery techniques. This research further explores how, and to what extent, we are causing phenomenal disturbances in the natural meteorological pattern through urban growth. Detailed observations using a standardized ambient weather station at the study site, compared with the closest airport weather data to evaluate patterns and differences, show striking differences in temperatures, wind patterns and even rainfall quantity, especially during high-pressure days. Winter-time lows dip to 8 degrees below freezing with heavy frost and ice, while only 30 km away minimum figures barely touch single-digit temperatures. Human activity is having an unprecedented effect on climatic patterns in record-breaking trends, which is a warning of what may follow in the next 15-25 years for the next generation living in cities, and a serious exploration of possible solutions is a must.

Keywords: climate change, meteorology, urban heat island, urbanization

Procedia PDF Downloads 60
24361 A Method of Effective Planning and Control of Industrial Facility Energy Consumption

Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova

Abstract:

A method of effective planning and control of industrial facility energy consumption is offered. The method allows the management and full control of complex production facilities to be arranged optimally, in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of the power efficiency characteristics with the prescribed accuracy. The problem of optimally designing the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of forecast points with the prescribed accuracy; solving the problem by standard principles under incomplete statistical data by minimizing a regularized function; and minimizing the technical and economic losses due to forecasting errors.
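
As one concrete reading of the "minimization of the regularized function" criterion, the sketch below fits a simple power-efficiency characteristic to sparse consumption data with a ridge (Tikhonov-regularized) least-squares solution. The quadratic model form, variable names, and numbers are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Hypothetical example: fit a power-efficiency characteristic y = f(load) from
# sparse, noisy observations by minimizing a regularized least-squares function.
load = np.array([0.2, 0.35, 0.5, 0.7, 0.9])       # normalized production load
energy = np.array([1.8, 2.4, 3.1, 4.2, 5.6])       # measured energy consumption

X = np.vander(load, N=3, increasing=True)           # basis [1, load, load^2]
lam = 0.1                                            # regularization weight
# Closed-form ridge solution: argmin ||X c - y||^2 + lam ||c||^2
coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ energy)

forecast = np.vander(np.array([0.6, 0.8]), N=3, increasing=True) @ coef
print(coef, forecast)
```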

Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics

Procedia PDF Downloads 356
24360 Traumatic Spinal Cord Injury in King Fahd Medical City: An Epidemiological Study

Authors: Saeed Alshahri

Abstract:

Introduction: Our study aims to estimate the characteristics and causes of traumatic spinal cord injury (TSCI) at King Fahad Medical City (KFMC) in Riyadh in order to propose a strategy for the primary prevention of traumatic spinal cord injury. Method: A cross-sectional, retrospective study was conducted on all TSCI patients aged 14 and above who were admitted to the rehabilitation center of King Fahad Medical City from January 2012 to December 2015. Furthermore, a descriptive analysis was conducted considering factors including age, gender, marital status, educational level, cause of injury and characteristics of injury. Results: A total of 216 patients were admitted during this period; the mean age was 28.94 years, the majority of patients were male (86.5%), 71.7% of patients had a high-school level of education or less, 68% were single, road traffic accidents (RTA) were the main cause (90.7%), and the most common outcome of TSCI was complete paraplegia (37%). Furthermore, statistically, we found that males are at a lower risk of having incomplete paraplegia compared to females (p = 0.035, RRR = 0.35). Conclusion: The rate of TSCI related to RTA has increased in Saudi Arabia in recent years despite the government’s efforts to decrease RTA. It is clear that we need TSCI registry data developed on the basis of international data standards to have a clear idea about the exact etiology of TSCI in Saudi Arabia. This will assist in planning for primary prevention.

Keywords: traumatic spinal cord injury, road traffic accident, Saudi Arabia, spinal cord injury

Procedia PDF Downloads 320
24359 Processing Big Data: An Approach Using Feature Selection

Authors: Nikat Parveen, M. Ananthi

Abstract:

Big data is an emerging technology that collects data from various sensors, and those data are used in many fields. Data retrieval is a major issue, as there is a need to extract exactly the data required. In this paper, a large data set is processed using feature selection. Feature selection helps to choose the data which are actually needed to process and execute the task. The key value is the one which helps to point out the exact data available in the storage space. Here the available data is streamed, and R-Center is proposed to achieve this task.
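
The abstract does not specify the selection algorithm, so the sketch below shows a generic feature-selection step (scikit-learn's SelectKBest on synthetic data) merely to illustrate the idea of keeping only the columns needed for a task; it is not the proposed R-Center method.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Illustrative stand-in data: 1000 records with 50 columns, few of which matter.
X, y = make_classification(n_samples=1000, n_features=50, n_informative=5, random_state=0)

selector = SelectKBest(score_func=f_classif, k=5)    # keep the 5 most relevant columns
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)                               # (1000, 5)
print(selector.get_support(indices=True))            # indices of the retained features
```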

Keywords: big data, key value, feature selection, retrieval, performance

Procedia PDF Downloads 313
24358 The Efficacy of Psycho-Education in Improving the Emotional Well-Being of Visually Impaired Adolescents in Nigeria

Authors: Janet Tolulope Olaseni

Abstract:

Emotional well-being in adolescents is an important psychological factor that can enhance positive living, but if it is not well nurtured, it can have adverse impacts on their development. Therefore, the present study examined the efficacy of psycho-education on the emotional well-being of adolescents who are visually impaired in Nigeria. A total of twenty-eight (28) participants, comprising nineteen (19) males and nine (9) females (M = 15.82, SD = 2.23) from a Nigerian School for the Blind, participated in the quasi-experimental study. A randomized clinical trial design was used to assign the participants into three groups (Complete Psycho-education, Incomplete Psycho-education, and No Psycho-education). Standardized scales were used to gather data from the respondents. The formulated hypotheses were tested using a dependent t-test and analysis of covariance. The results showed that there was a significant effect of psycho-education on the emotional well-being of the visually impaired adolescents. Those who received complete psycho-education had the highest level of emotional well-being compared to those in the other groups. In order to enhance the emotional well-being of visually impaired adolescents, the study recommended that a complete psycho-education programme should be incorporated into the school activities of visually impaired adolescents.

Keywords: emotional well-being, psycho-education, visually impaired adolescents, Nigeria

Procedia PDF Downloads 83
24357 Investigation of Performance of Organic Acids on Carbonate Rocks (Experimental Study in Ahwaz Oilfield)

Authors: Azad Jarrahian, Ehsan Heidaryan

Abstract:

Matrix acidizing treatments can yield impressive production increases if properly applied. In this study, carbonate samples taken from the Ahwaz Oilfield have undergone static solubility, sludge, emulsion, and core flooding tests. In each test the interaction of acid and rock is reported, and at the end it is shown how initial permeability and acid type affect the overall treatment efficiency.

Keywords: carbonate acidizing, organic acids, spending rate, acid penetration, incomplete spending

Procedia PDF Downloads 405
24356 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has been a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm utilizes a proximal policy-update approach. This allows for more extensive updates of policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO has the advantage of offline learning, effectively utilizing historical experience data for training and enhancing sample utilization. This means that even with limited resources, PPO can efficiently train for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
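
For readers unfamiliar with PPO, the snippet below sketches the clipped surrogate objective that implements the "constrained update extent" mentioned above. It is a generic illustration of the standard PPO loss, not the authors' knowledge-graph inference implementation; the tensor names and values are made up.

```python
import torch

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective used by PPO (returned as a loss to minimize).

    ratio = pi_new(a|s) / pi_old(a|s); the ratio is clipped to [1 - eps, 1 + eps]
    so the updated policy stays close to the one that collected the data.
    """
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage with made-up numbers
new_lp = torch.tensor([-0.9, -1.1, -0.5])
old_lp = torch.tensor([-1.0, -1.0, -1.0])
adv = torch.tensor([1.0, -0.5, 2.0])
print(ppo_clipped_loss(new_lp, old_lp, adv))
```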

Keywords: reinforcement learning, PPO, knowledge inference

Procedia PDF Downloads 206
24355 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models that overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for every system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications and with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of the correlations was able to discover so far unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
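
A minimal sketch of the kind of regression described above, i.e. one that also captures correlations between products of variables, is shown below using a polynomial basis expansion followed by linear regression. The data, feature degree, and library choice are illustrative assumptions and do not reproduce the authors' method or the WORHP-based optimization.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical sensor data: two measured inputs and one output that depends on
# their product, which a plain linear regression on single variables would miss.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
y = 2.0 * x1 + 0.5 * x2 + 3.0 * x1 * x2 + rng.normal(0, 0.01, 200)

X = np.column_stack([x1, x2])
expansion = PolynomialFeatures(degree=2, include_bias=False)   # adds x1*x2, x1^2, x2^2
model = LinearRegression().fit(expansion.fit_transform(X), y)

print(expansion.get_feature_names_out(["x1", "x2"]))
print(np.round(model.coef_, 3))   # the x1*x2 coefficient (~3.0) is recovered
```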

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 377
24354 Dynamic Environmental Impact Study during the Construction of the French Nuclear Power Plants

Authors: A. Er-Raki, D. Hartmann, J. P. Belaud, S. Negny

Abstract:

This paper has a double purpose: firstly, a literature review of life cycle analysis (LCA) and, secondly, a comparison between conventional (static) LCA and multi-level dynamic LCA on the following items: (i) the evolution of inventories with time and (ii) the temporal evolution of the databases. The first part of the paper summarizes the state of the art of the static LCA approach. The different limits of static LCA have been identified, especially the non-consideration of spatial and temporal evolution in the inventory, in the characterization factors (CFs) and in the databases. Then a description of the different levels of integration of the notion of temporality in life cycle analysis studies was made. In the second part, the dynamic inventory has been evaluated firstly for a single nuclear plant and secondly for the entire French nuclear power fleet by taking into account the construction durations of all the plants. In addition, the databases have been adapted by integrating the temporal variability of the French energy mix. Several iterations were used to converge towards the real environmental impact of the energy mix. Another adaptation of the databases was made to take into account the temporal evolution of the market data for raw materials. The energy mix of the period studied was identified based on an extrapolation of the production reference values of each means of production. An application to the construction of the French nuclear power plants from 1971 to 2000 has been performed, in which a dynamic inventory of raw materials has been evaluated. Then the impacts were characterized by the ILCD 2011 characterization method. In order to compare with a purely static approach, a static impact assessment was made with the Ecoinvent v3.4 data sheets without adaptation and a static inventory assuming that all the power stations had been built at the same time. Finally, a comparison between static and dynamic LCA approaches was set up to determine the gap between them for each of the two levels of integration. The results were analyzed to identify the contribution of the evolving nuclear power fleet construction to the total environmental impacts of the French energy mix during the same period. An equivalent strategy using a dynamic approach will further be applied to identify the environmental impacts of different scenarios of the energy transition, allowing the best energy mix to be chosen from an environmental viewpoint.

Keywords: LCA, static, dynamic, inventory, construction, nuclear energy, energy mix, energy transition

Procedia PDF Downloads 83
24353 Localized Variabilities in Traffic-related Air Pollutant Concentrations Revealed Using Compact Sensor Networks

Authors: Eric A. Morris, Xia Liu, Yee Ka Wong, Greg J. Evans, Jeff R. Brook

Abstract:

Air quality monitoring stations tend to be widely distributed and are often located far from major roadways, thus, determining where, when, and which traffic-related air pollutants (TRAPs) have the greatest impact on public health becomes a matter of extrapolation. Compact, multipollutant sensor systems are an effective solution as they enable several TRAPs to be monitored in a geospatially dense network, thus filling in the gaps between conventional monitoring stations. This work describes two applications of one such system named AirSENCE for gathering actionable air quality data relevant to smart city infrastructures. In the first application, four AirSENCE devices were co-located with traffic monitors around the perimeter of a city block in Oshawa, Ontario. This study, which coincided with the COVID-19 outbreak of 2020 and subsequent lockdown measures, demonstrated a direct relationship between decreased traffic volumes and TRAP concentrations. Conversely, road construction was observed to cause elevated TRAP levels while reducing traffic volumes, illustrating that conventional smart city sensors such as traffic counters provide inadequate data for inferring air quality conditions. The second application used two AirSENCE sensors on opposite sides of a major 2-way commuter road in Toronto. Clear correlations of TRAP concentrations with wind direction were observed, which shows that impacted areas are not necessarily static and may exhibit high day-to-day variability in air quality conditions despite consistent traffic volumes. Both of these applications provide compelling evidence favouring the inclusion of air quality sensors in current and future smart city infrastructure planning. Such sensors provide direct measurements that are useful for public health alerting as well as decision-making for projects involving traffic mitigation, heavy construction, and urban renewal efforts.

Keywords: distributed sensor network, continuous ambient air quality monitoring, Smart city sensors, Internet of Things, traffic-related air pollutants

Procedia PDF Downloads 49
24352 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has been a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm utilizes a proximal policy-update approach. This allows for more extensive updates of policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO has the advantage of offline learning, effectively utilizing historical experience data for training and enhancing sample utilization. This means that even with limited resources, PPO can efficiently train for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.

Keywords: reinforcement learning, PPO, knowledge inference, supervised learning

Procedia PDF Downloads 37
24351 Diversity in Finance Literature Revealed through the Lens of Machine Learning: A Topic Modeling Approach on Academic Papers

Authors: Oumaima Lahmar

Abstract:

This paper aims to define a structured topography for finance researchers seeking to navigate the body of knowledge in their extrapolation of finance phenomena. To make sense of the body of knowledge in finance, a probabilistic topic modeling approach is applied to 6,000 abstracts of academic articles published in three top journals in finance between 1976 and 2020. This approach combines machine learning techniques and natural language processing to statistically identify the connections between research articles and their shared topics, each described by relevant keywords. The topic modeling analysis reveals 35 coherent topics that depict the finance literature well and provide a comprehensive structure for the ongoing research themes. Comparing the extracted topics to the Journal of Economic Literature (JEL) classification system, a significant similarity was highlighted between the characterizing keywords. On the other hand, we identify other topics that do not match the JEL classification despite being relevant in the finance literature.
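
A minimal sketch of probabilistic topic modeling in this spirit is shown below, using latent Dirichlet allocation on a tiny stand-in corpus; the library, the corpus, and the choice of two topics are assumptions for illustration only (the study extracts 35 topics from about 6,000 abstracts).

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Stand-in corpus; the study uses ~6,000 abstracts from three finance journals.
abstracts = [
    "asset pricing and expected stock returns under risk factors",
    "corporate governance, boards and executive compensation",
    "liquidity, market microstructure and trading volume",
    "capital structure, leverage and firm investment decisions",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)   # 35 topics in the paper
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]          # top keywords per topic
    print(f"topic {k}: {top}")
print("perplexity:", lda.perplexity(doc_term))
```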

Keywords: finance literature, textual analysis, topic modeling, perplexity

Procedia PDF Downloads 139
24350 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners’ needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: big data, learning analytics, analytics, big data in education, Hadoop

Procedia PDF Downloads 385
24349 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain

Authors: G. Hafner

Abstract:

A method developed at the University of Stuttgart will be presented: ‘Analysis, Evaluation and Optimization of Food Management’. A major focus is the quantification of food losses and food waste as well as their classification and evaluation with regard to system optimization through waste prevention. For the quantification and accounting of food, food losses and food waste along the food chain, a clear definition of core terms is required at the beginning. This includes their methodological classification and demarcation within sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade and consumption (at home and out of home). For the adjustment of core terms, the authors have cooperated with relevant stakeholders in Germany to achieve the goal of holistic and agreed definitions for the whole food chain. This includes modeling of sub-systems within the food value chain, definition of terms, differentiation between food losses and food wastage as well as methodological approaches. ‘Food Losses’ and ‘Food Wastes’ are assigned to individual sectors of the food chain, including a description of the respective methods. The method for analysis, evaluation and optimization of food management systems consists of the following parts: Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated using the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the respective results to quantify food wastage in Germany.

Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency

Procedia PDF Downloads 377
24348 Provenance and Paleoweathering Conditions of Doganhisar Clay Beds

Authors: Mehmet Yavuz Huseyinca

Abstract:

The clay beds are located to the south-southeast of Doğanhisar and northwest of Konya in Central Anatolia. In the scope of this preliminary study, three types of samples were investigated: basement phyllite (Bp) overlain by the clay beds, weathered phyllite (Wp) and Doğanhisar clay (Dc). The Chemical Index of Alteration (CIA) values of Dc range from 81 to 88 with an average of 85. This value is higher than that of Post-Archean Australian Shale (PAAS) and indicates very intense chemical weathering in the source area. On the other hand, the A-CN-K diagram indicates that Bp underwent a high degree of post-depositional K-metasomatism. The average reconstructed CIA value of the Bp prior to the K-metasomatism is about 81, which overlaps the CIA values of the Wp (83) and Dc (85). Similar CIA values indicate parallel weathering trends. Also, extrapolation of the samples back to the plagioclase-alkali feldspar line in the A-CN-K diagram suggests an identical provenance, close to granite in composition. Hence, the weathering history of Dc includes two steps. The first is the intense weathering of a granitic source to Bp, with post-depositional K-metasomatism, and the second is the progressive weathering of Bp back to pre-metasomatised conditions (formation of Wp), ending with the deposition of Dc.
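
For reference, the CIA referred to above is conventionally computed from molar oxide proportions as CIA = 100 × Al2O3 / (Al2O3 + CaO* + Na2O + K2O), where CaO* is the CaO of the silicate fraction (Nesbitt and Young). A small sketch of this calculation is given below; the numerical values are hypothetical and are not the paper's data.

```python
def chemical_index_of_alteration(al2o3, cao_silicate, na2o, k2o):
    """CIA of Nesbitt and Young (1982), with oxides as molar proportions.

    cao_silicate is CaO* (CaO in the silicate fraction only, i.e. corrected for
    carbonate and apatite). Values near 50 indicate fresh rock; values close to
    100 indicate intense chemical weathering.
    """
    return 100.0 * al2o3 / (al2o3 + cao_silicate + na2o + k2o)

# Illustrative molar proportions (hypothetical, not measured values from the study)
print(round(chemical_index_of_alteration(0.18, 0.005, 0.01, 0.02), 1))   # ~83.7
```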

Keywords: clay beds, Doganhisar, provenance, weathering

Procedia PDF Downloads 286
24347 Prevalence of Dietary Supplements among University Athlete Regime in Sri Lanka: A Cross-Sectional Study

Authors: S. A. N. Rashani, S. Pigera, P. N. J. Fernando, S. Jayawickema, M. A. Niriella, A. P. De Silva

Abstract:

Dietary supplement (DS) consumption is trending sharply upward among young athletes in developing countries. Many athletes try to fulfill their nutrition requirements using dietary supplements without knowing their effects on health and performance. This study aimed to assess the DS usage patterns of university athletes in Sri Lanka. A self-administered questionnaire was employed to collect data from state university students representing a university team, and a sample of 200 respondents was selected based on a stratified random sampling technique. Incomplete questionnaires were omitted from the analysis. The data were analyzed using IBM SPSS Statistics for Windows, version 25. The level of significance was set at p < 0.05 in the data analysis. The prevalence of DS use was 48.2% (n = 94), with no significant association between gender and DS intake. Protein (15.9%), vitamins (14.9%), sports drinks (12.8%), and creatine (8.2%) were the DS most consumed by students. Weightlifting (85.0%), football (62.5%), rugby (57.7%), and wrestling (40.9%) players showed higher DS usage than players of other sports. Coaches were reported as the most frequent source of advice to use DS (43.0%). Students who won inter-university games showed significantly lower DS intake (p = 0.002) compared to others. Interestingly, DS use was significantly affected by the season of use (p = 0.000), with use during competition and training seasons (62.4%) being the most frequent. The pharmacy (27.0%) was the most common place to buy DS. Students who used nutrient-dense meal plans during the training and competition period still showed a 61.0% tendency to consume DS. The most commonly claimed reason to use DS was to increase energy and strength (29.0%). The largest group reported using DS for less than one month (35.5%), while the second most common duration was over three years (17.2%). Considering body mass index (BMI), healthy-weight students showed a 71.0% DS prevalence. DS prevalence was moderate among Sri Lankan university students, with the highest DS use occurring during competition and training seasons. Moreover, the study emphasizes the need for nutrition and anti-doping counseling in the Sri Lankan university system.

Keywords: athlete, dietary, supplements, university

Procedia PDF Downloads 175
24346 Analysis of Big Data

Authors: Sandeep Sharma, Sarabjit Singh

Abstract:

With growing user demand and the growth trends of large volumes of free data, storage solutions now face increasing challenges in protecting, storing and retrieving data. The day is not far off when storage companies and organizations will start saying 'no' to storing our valuable data, or will start charging a huge amount for its storage and protection. On the other hand, environmental conditions make it challenging to maintain and establish new data warehouses and data centers in the face of global warming threats. The challenge of small data is over; the big challenge now is how to manage the exponential growth of data. In this paper we have analyzed the growth trend of big data and its future implications. We have also focused on the impact of unstructured data on various concerns, and we have suggested some possible remedies to streamline big data.

Keywords: big data, unstructured data, volume, variety, velocity

Procedia PDF Downloads 518
24345 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstruction from incomplete 2D scans of Tifinagh characters. This algorithm is based on using the correlation between the lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
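
As a rough illustration of the recognition stage only, the sketch below trains a small neural network classifier on scikit-learn's 8x8 digit images standing in for segmented Tifinagh glyphs; the dataset, network size, and features are assumptions and do not reproduce the authors' shape-analysis pipeline or reconstruction algorithm.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in data: 8x8 digit images play the role of segmented character glyphs.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)           # in the paper, features come from shape analysis
print("accuracy:", round(clf.score(X_test, y_test), 3))
```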

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 308
24344 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms expressed by formal formulas, which can obtain data inconsistent with all target columns under conditional attribute dependencies, whether the data is structured (SQL) or unstructured (NoSQL), and gives six data cleaning methods based on these algorithms.
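
As a minimal illustration of violation-data discovery for one attribute dependency, the sketch below flags rows that break a functional dependency (determinant -> dependent) in a pandas table; the function name and example data are assumptions, and the paper's formal algorithms for SQL and NoSQL sources are not reproduced here.

```python
import pandas as pd

def dependency_violations(df, determinant, dependent):
    """Rows violating the functional dependency determinant -> dependent.

    Group by the determinant columns and flag groups in which the dependent
    column takes more than one distinct value.
    """
    counts = df.groupby(determinant)[dependent].transform("nunique")
    return df[counts > 1]

# Hypothetical dirty data: zip_code should determine city.
data = pd.DataFrame({
    "zip_code": ["10001", "10001", "94105", "94105"],
    "city":     ["New York", "Newark", "San Francisco", "San Francisco"],
})
print(dependency_violations(data, ["zip_code"], "city"))
```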

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 536
24343 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
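
For orientation, the classical tampered failure rate form for a simple step-stress test with a stress change at time $\tau$ is written below; this is a generic illustration of the TFR idea rather than the paper's exact model, in which the intensity depends on the degradation value.

```latex
\lambda(t) =
\begin{cases}
  \lambda_1(t), & 0 \le t < \tau, \\
  \alpha \, \lambda_1(t), & t \ge \tau,
\end{cases}
\qquad \alpha > 0,
```

where $\lambda_1$ is the failure rate under the initial stress and $\alpha$ is the tampering factor induced by the stress change.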

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 310
24342 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client’s business automation model. Higher audit risk will consequently lead to higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in its effect on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client’s industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm that has a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability and auditor quality. We chose the fixed effects model as it makes it possible to control for variables that have not been or cannot be measured.
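
A minimal sketch of a firm and year fixed-effects specification of the kind described is shown below, using a synthetic panel and dummy-variable estimation via statsmodels; the variable names, the interaction term, and the data are assumptions, not the study's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: audit fees explained by digitalization, Big 4 status and
# their interaction, with firm and year fixed effects absorbed by dummies.
rng = np.random.default_rng(0)
n_firms, years = 50, np.arange(2005, 2016)
panel = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), len(years)),
    "year": np.tile(years, n_firms),
})
panel["digital"] = rng.uniform(0, 1, len(panel))
panel["big4"] = rng.integers(0, 2, len(panel))
panel["log_fee"] = (0.4 * panel["digital"] + 0.3 * panel["big4"]
                    + 0.5 * panel["digital"] * panel["big4"]
                    + rng.normal(0, 0.1, len(panel)))

model = smf.ols("log_fee ~ digital * big4 + C(firm) + C(year)", data=panel).fit()
print(model.params[["digital", "big4", "digital:big4"]])
```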

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 273
24341 Assessing the Quality of Clinical Photographs Taken for Orthodontic Patients at Queen’s Hospital, Romford

Authors: Maya Agarwala

Abstract:

Objectives: To audit the quality of clinical photographs taken for orthodontic patients at Queen’s Hospital, Romford. Design and setting: All orthodontic photographs are taken in the Medical Photography Department at Queen’s Hospital. This was a retrospective audit with data collected between January and March 2023. Gold standard: the Institute of Medical Illustrators (IMI) standard of 12 photographs: 6 extraoral and 6 intraoral; 100% of patients to have the standard 12 photographs of satisfactory diagnostic quality. Materials and methods: 30 patients were randomly selected. All photographs were analysed against the IMI gold standard. Results: A total of 360 photographs were analysed. 100% of the photographs had the 12 photographic views, of which 93.1% met the gold standard. Of the extraoral photographs, 99.4% met the gold standard and 0.6% had incorrect head positioning. Of the intraoral photographs, 87.2% met the gold standard. The most common intraoral errors were the presence of saliva pooling (7.2%), insufficient soft tissue retraction (3.3%), incomplete occlusal surface visibility (2.2%) and mirror fogging (1.1%). Conclusion: The gold standard was not met; however, the overall standard of orthodontic photographs is high. Further training of the Medical Photography team is needed to improve the quality of photographs. Following the training, the audit will be repeated. High-quality clinical photographs are an important part of clinical record keeping.

Keywords: orthodontics, paediatric, photography, audit

Procedia PDF Downloads 57
24340 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity

Authors: Hoda A. Abdel Hafez

Abstract:

Mining big data represents a big challenge nowadays. Much research is concerned with mining massive amounts of data and big data streams. Mining big data faces a lot of challenges, including scalability, speed, heterogeneity, accuracy, provenance and privacy. In the telecommunication industry, mining big data is like mining for gold; it represents a big opportunity for maximizing the revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them.

Keywords: mining big data, big data, machine learning, telecommunication

Procedia PDF Downloads 373
24339 Validating Thermal Performance of Existing Wall Assemblies Using In-Situ Measurements

Authors: Shibei Huang

Abstract:

In deep energy retrofits, the thermal performance of existing building envelopes is often difficult to determine with a high level of accuracy. For older buildings, the records of existing assemblies are often incomplete or inaccurate. To obtain greater baseline performance accuracy for energy models, in-field measurement tools can be used to obtain data on the thermal performance of the existing assemblies. For a known assembly, these field measurements assist in validating the U-factor estimates. If the field-measured U-factor consistently varies from the calculated prediction, those measurements prompt further study. For an unknown assembly, successful field measurements can provide an approximate U-factor evaluation, validate assumptions, or identify anomalies requiring further investigation. Using case studies, this presentation will focus on non-destructive methods that use a set of field tools to validate the baseline U-factors for a range of existing buildings with various wall assemblies. The lessons learned cover what can be achieved, the limitations of these approaches and tools, and ideas for improving the validity of measurements. Key factors include the weather conditions, the interior conditions, the thermal mass of the measured assemblies, and the thermal profiles of the assemblies in question.
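
One common non-destructive approach pairs a heat flux sensor with interior and exterior temperature sensors and applies the averaging method (as in ISO 9869) to estimate the in-situ U-factor; whether the presentation uses exactly this method is not stated. The sketch below, with hypothetical readings, shows the calculation.

```python
import numpy as np

def in_situ_u_factor(heat_flux, t_interior, t_exterior):
    """Average-method estimate of an assembly U-factor from in-situ data.

    heat_flux: measured heat flux density through the wall (W/m^2)
    t_interior, t_exterior: simultaneous air temperatures (degC)
    Averaging approach: U ~ sum(q) / sum(dT), taken over a long measurement
    period with a sufficient indoor-outdoor temperature difference.
    """
    q = np.asarray(heat_flux, dtype=float)
    dT = np.asarray(t_interior, dtype=float) - np.asarray(t_exterior, dtype=float)
    return q.sum() / dT.sum()          # W/(m^2*K)

# Illustrative hourly readings (hypothetical values)
print(round(in_situ_u_factor([6.1, 5.8, 6.4], [20.5, 20.4, 20.6], [2.0, 1.5, 1.0]), 3))
```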

Keywords: existing building, sensor, thermal analysis, retrofit

Procedia PDF Downloads 32
24338 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore various techniques and methodologies for validating the comparison and integration of JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
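
A minimal sketch of one way to check a JSON payload against an equivalent XML record in a test is shown below: the flat XML record is converted to a dictionary and compared field by field. The payloads, tag structure, and helper name are assumptions for illustration; nested documents would need a fuller mapping.

```python
import json
import xml.etree.ElementTree as ET

def xml_record_to_dict(xml_text):
    """Flatten a simple, non-nested XML record into a dict of tag -> text."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

json_payload = '{"id": "42", "name": "Alice", "role": "tester"}'
xml_payload = "<user><id>42</id><name>Alice</name><role>tester</role></user>"

json_data = json.loads(json_payload)
xml_data = xml_record_to_dict(xml_payload)

assert json_data == xml_data, f"mismatch: {json_data} != {xml_data}"
print("JSON and XML records carry the same fields and values")
```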

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 98
24337 Effect of Al2O3 Nanoparticles on Corrosion Behavior of Aluminum Alloy Fabricated by Powder Metallurgy

Authors: Muna Khethier Abbass, Bassma Finner Sultan

Abstract:

In this research, the effect of Al2O3 nanoparticles on the corrosion behavior of an aluminum base alloy (Al-4.5wt%Cu-1.5wt%Mg) has been investigated. Nanocomposites reinforced with variable contents of 1, 3 and 5 wt% of Al2O3 nanoparticles were fabricated using powder metallurgy. All samples were prepared from the base alloy powders under the best powder metallurgy processing conditions of 6 hr of mixing time, 450 MPa of compaction pressure and 560°C of sintering temperature. Density and microhardness measurements and electrochemical corrosion tests were performed for all prepared samples in 3.5 wt% NaCl solution at room temperature using a potentiostat. It has been found that the density and microhardness of the nanocomposite increase with increasing wt% of Al2O3 nanoparticles in the Al matrix. It was found from the Tafel extrapolation method that the corrosion rates of the nanocomposites reinforced with alumina nanoparticles were lower than that of the base alloy. From the results of the corrosion test by the potentiodynamic cyclic polarization method, it was found that the pitting corrosion resistance improves with the addition of Al2O3 nanoparticles. It was noticed that the pits disappear and the hysteresis loop also disappears from the anodic polarization curve.

Keywords: powder metallurgy, nano composites, Al-Cu-Mg alloy, electrochemical corrosion

Procedia PDF Downloads 443