Search results for: real time simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20562


15072 Study of Ultrasonic Waves in Unidirectional Fiber-Reinforced Composite Plates for the Aerospace Applications

Authors: DucTho Le, Duy Kien Dao, Quoc Tinh Bui, Haidang Phan

Abstract:

The article is concerned with the motion of ultrasonic guided waves in a unidirectional fiber-reinforced composite plate under acoustic sources. Such a unidirectional composite material has orthotropic elastic properties, as it is very stiff along the fibers and rather compliant across the fibers. The dispersion equations of free Lamb waves propagating in an orthotropic layer are derived, resulting in the dispersion curves. The connection of these equations to the Rayleigh-Lamb frequency relations of isotropic plates is discussed. By the use of reciprocity in elastodynamics, closed-form solutions of elastic wave motions subjected to time-harmonic loads in the layer are computed in a simple manner. We also consider the problem of Lamb waves generated by a set of time-harmonic sources. The obtained computations can be very useful for developing ultrasound-based methods for nondestructive evaluation of composite structures.
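
For reference, the classical Rayleigh-Lamb frequency relations for an isotropic plate of half-thickness h, to which the abstract relates the orthotropic dispersion equations, take the following standard textbook form (this is the isotropic result, not the orthotropic equations derived in the paper):

```latex
% Rayleigh-Lamb frequency relations for an isotropic plate of half-thickness h
% (exponent +1: symmetric modes, -1: antisymmetric modes)
\frac{\tan(qh)}{\tan(ph)} = -\left[\frac{4k^{2}pq}{\left(q^{2}-k^{2}\right)^{2}}\right]^{\pm 1},
\qquad
p^{2}=\frac{\omega^{2}}{c_{L}^{2}}-k^{2},
\qquad
q^{2}=\frac{\omega^{2}}{c_{T}^{2}}-k^{2}
```

Here k is the wavenumber, ω the angular frequency, and c_L and c_T the longitudinal and transverse bulk wave speeds; the orthotropic case replaces these isotropic wave speeds with direction-dependent stiffness terms.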

Keywords: lamb waves, fiber-reinforced composite plates, dispersion equations, nondestructive evaluation, reciprocity theorems

Procedia PDF Downloads 139
15071 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies

Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo

Abstract:

Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operation costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimations. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes to facilitate decision-making regarding feasibility and performance for retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for the estimation of energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using logged load demand data over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The methodologies used are: first, an energy-based model; second, a time-domain model of load variations with a rule-based Power Management System (PMS); and third, a time-domain load variation model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies for this case are conducted. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy. However, models that consider the time variation of the load provide more realistic estimations of energy and fuel consumption with respect to hydrogen tank and battery size, while still requiring low computational time.
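
As an illustration of the second methodology, the sketch below implements a minimal rule-based PMS over a logged load profile; all ratings, thresholds, efficiencies and load values are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of a rule-based power management system (PMS) over a logged
# load profile. All numbers (ratings, capacities, loads) are hypothetical.

def rule_based_pms(load_kw, dt_h=1.0, fc_rating=400.0, genset_rating=500.0,
                   batt_capacity_kwh=200.0):
    soc = 0.5 * batt_capacity_kwh          # start battery at 50% state of charge
    h2_energy, diesel_energy = 0.0, 0.0    # cumulative energy delivered [kWh]
    for p in load_kw:
        fc = min(p, fc_rating)             # fuel cell covers the base load first
        residual = p - fc
        batt = min(residual, soc / dt_h)   # battery covers peaks while charged
        residual -= batt
        genset = min(residual, genset_rating)  # diesel genset covers the rest
        soc -= batt * dt_h
        if fc < fc_rating and soc < batt_capacity_kwh:
            # spare fuel cell power recharges the battery
            recharge = min(fc_rating - fc, (batt_capacity_kwh - soc) / dt_h)
            fc += recharge
            soc += recharge * dt_h
        h2_energy += fc * dt_h
        diesel_energy += genset * dt_h
    return h2_energy, diesel_energy

# Example usage with a toy hourly load profile [kW]
print(rule_based_pms([300, 450, 600, 520, 380, 250]))
```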

Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system

Procedia PDF Downloads 14
15070 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey

Authors: Mesut BALIBEY, Serpil TÜRKYILMAZ

Abstract:

A popular topic in the econometrics and time series literature is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a method proposed to test for a unit root based on the periodogram ordinates has certain advantages over conventional tests: periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained. For higher-order processes, the distribution remains the same asymptotically. In this study, in order to illustrate the advantages of the periodogram approach over conventional tests, we examine a possible relationship between tourism and economic growth during the period 1999:01-2010:12 for Turkey by using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
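
For illustration, a minimal sketch of two of the ingredients mentioned above - periodogram ordinates and the Engle-Granger cointegration test - using standard Python libraries on simulated series (placeholders, not the Turkish tourism and growth data):

```python
# Sketch: periodogram ordinates and an Engle-Granger cointegration test on
# two simulated monthly series sharing a stochastic trend.
import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 144                                            # e.g. 1999:01-2010:12 = 144 months
common_trend = np.cumsum(rng.normal(size=n))       # shared stochastic trend
tourism = 0.8 * common_trend + rng.normal(scale=0.5, size=n)
growth = 1.2 * common_trend + rng.normal(scale=0.5, size=n)

# Periodogram ordinates can be computed without any model specification
freqs, pxx = periodogram(tourism, detrend="linear")
print("dominant frequency:", freqs[np.argmax(pxx)])

# Engle-Granger two-step test: a small p-value suggests cointegration
t_stat, p_value, _ = coint(tourism, growth)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```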

Keywords: cointegration, economic growth, periodogram ordinate, tourism

Procedia PDF Downloads 256
15069 Speedup Breadth-First Search by Graph Ordering

Authors: Qiuyi Lyu, Bin Gong

Abstract:

Breadth-First Search (BFS) is a core graph algorithm that is widely used for graph analysis. As it is frequently used in many graph applications, improving BFS performance is essential. In this paper, we present a graph ordering method that reorders the graph nodes to achieve better data locality, thus improving BFS performance. Our method is based on the observation that sibling relationships dominate the cache access pattern during the BFS traversal. Therefore, we propose a frequency-based model to construct the graph order. First, we optimize the graph order according to the nodes' visit frequency: nodes with high visit frequency are processed with priority. Second, we try to maximize the overlap of child nodes layer by layer. As this problem is proved to be NP-hard, we propose a heuristic method that greatly reduces the preprocessing overhead. We conduct extensive experiments on 16 real-world datasets. The results show that our method achieves performance comparable to the state-of-the-art methods while the graph ordering overhead is only about 1/15 of theirs.
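
A minimal sketch of the general idea follows, assuming an adjacency-list graph: nodes are relabelled by a simple proxy for visit frequency (their degree) before a standard BFS is run. The paper's actual frequency model and layer-by-layer overlap heuristic are more elaborate.

```python
# Sketch: reorder nodes by an approximate visit frequency (degree) so that
# frequently touched nodes get contiguous ids, then run a plain BFS.
from collections import deque

def frequency_order(adj):
    # adj: dict {node: list of neighbours}; higher-degree nodes get smaller ids
    ranked = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    new_id = {v: i for i, v in enumerate(ranked)}
    return [sorted(new_id[w] for w in adj[v]) for v in ranked], new_id

def bfs(adj_list, source=0):
    dist = [-1] * len(adj_list)
    dist[source] = 0
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj_list[v]:
            if dist[w] == -1:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
reordered, mapping = frequency_order(graph)
print(bfs(reordered, source=mapping[0]))   # BFS distances in the reordered graph
```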

Keywords: breadth-first search, BFS, graph ordering, graph algorithm

Procedia PDF Downloads 125
15068 Preference Aggregation and Mechanism Design in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Smart Grid is the vision of the future power system that combines advanced monitoring and communication technologies to provide energy in a smart, efficient, and user-friendly manner. This proposal considers a demand response model in the Smart Grid based on utility maximization. Given a set of consumers with conflicting preferences in terms of consumption and a utility company that aims to minimize the peak demand and match demand to supply, we study the problem of aggregating these preferences while modelling the problem as a game. We also investigate whether an equilibrium can be reached that maximizes the social benefit. Based on such an equilibrium, we propose a dynamic pricing heuristic that computes the equilibrium and sets the prices accordingly. The developed approach was analysed theoretically and evaluated experimentally using real appliance data. The results show that our proposed approach achieves a substantial reduction in the overall energy consumption.

Keywords: heuristics, smart grid, aggregation, mechanism design, equilibrium

Procedia PDF Downloads 99
15067 Effect of Cellulase Pretreatment for n-Hexane Extraction of Oil from Garden Cress Seeds

Authors: Boutemak Khalida, Dahmani Siham

Abstract:

Garden cress (Lepidium sativum L.), belonging to the family Brassicaceae, is an edible annual herb. Its various parts (roots, leaves and seeds) have been used to treat various human ailments. Its seed extracts have been screened for various biological activities, such as hypotensive, antimicrobial, bronchodilator, hypoglycaemic and antianemic effects. The aim of the present study is to optimize the process parameters (cellulase concentration and incubation time) of the enzymatic pre-treatment of garden cress seeds and to evaluate the effect of cellulase pre-treatment of the crushed seeds on the oil yield, physico-chemical properties and antibacterial activity, compared with the non-enzymatic method. The optimum parameters of the cellulase pre-treatment were as follows: a cellulase concentration of 0.1% w/w and an incubation time of 2 h. After enzymatic pre-treatment, the oil was extracted with n-hexane for 1.5 h; the oil yield was 4.01% for the cellulase pre-treatment as against 10.99% in the control sample. The decrease in yield might be a result of mucilage: garden cress seeds are covered with a layer of mucilage which gels on contact with water. At the same time, the antibacterial activity was assessed using the agar diffusion method against 4 food-borne pathogens (Escherichia coli, Salmonella typhi, Staphylococcus aureus, Bacillus subtilis). The results showed that the bacterial strains are very sensitive to the oil obtained with cellulase pre-treatment: Staphylococcus aureus is extremely sensitive, with the largest zone of inhibition (40 mm); Escherichia coli and Salmonella typhi were very sensitive to the oil, with a zone of inhibition of 26 mm; and Bacillus subtilis is moderately sensitive, with an inhibition zone of 16 mm. However, the strains showed little sensitivity to the oil obtained without enzymatic pre-treatment (zone of inhibition < 8 mm). Enzymatic pre-treatment could thus be useful for the antimicrobial activity of the oil and holds good potential for use in the food and pharmaceutical industries.

Keywords: Lepidium sativum L., cellulase, enzymatic pretreatment, antibacterial activity.

Procedia PDF Downloads 444
15066 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for machines with large memory.

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 269
15065 Removal of Methylene Blue from Aqueous Solution by Adsorption onto Untreated Coffee Grounds

Authors: N. Azouaou, H. Mokaddem, D. Senadjki, K. Kedjit, Z. Sadaoui

Abstract:

Introduction: Water contamination caused by dye industries, including food, leather, textile, plastic, cosmetics, paper-making, printing and dye synthesis, has attracted more and more attention, since most dyes are harmful to human beings and the environment. Untreated coffee grounds were used as a high-efficiency adsorbent for the removal of a cationic dye (methylene blue, MB) from aqueous solution. Characterization of the adsorbent was performed using several techniques, such as SEM, surface area (BET), FTIR and pH of zero charge. The effects of contact time, adsorbent dose, initial solution pH and initial concentration were systematically investigated. Results showed the adsorption kinetics followed the pseudo-second-order kinetic model. The Langmuir isotherm model is in good agreement with the experimental data as compared to the Freundlich and D-R models. The maximum adsorption capacity was found to be 52.63 mg/g. In addition, a possible adsorption mechanism was also proposed based on the experimental results. Experimental: The adsorption experiments were carried out in batch at room temperature. A given mass of adsorbent was added to methylene blue (MB) solution and the mixture was agitated for a set time. Samples were withdrawn at regular time intervals, and the concentrations of MB left in the supernatant solutions were determined using a UV-vis spectrophotometer. The amount of MB adsorbed per unit mass of coffee grounds (qt) and the dye removal efficiency (R%) were evaluated. Results and Discussion: Some chemical and physical characteristics of the coffee grounds are presented, and the morphological analysis of the adsorbent was also studied. Conclusions: The good capacity of untreated coffee grounds to remove MB from aqueous solution was demonstrated in this study, highlighting its potential for effluent treatment processes. The kinetic experiments show that the adsorption is rapid, with a maximum adsorption capacity qmax = 52.63 mg/g achieved in 30 min. The adsorption process is a function of the adsorbent concentration, pH and metal ion concentration. The optimal parameters found are an adsorbent dose of m = 5 g, pH = 5 and ambient temperature. FTIR spectra showed that the principal functional sites taking part in the sorption process included carboxyl and hydroxyl groups.
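
The pseudo-second-order kinetic model and the Langmuir isotherm referred to above are standard and, in their common forms, read (with qt the uptake at time t, qe the equilibrium uptake, Ce the equilibrium concentration):

```latex
% Pseudo-second-order kinetics (linearised form)
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}
% Langmuir isotherm (q_max = 52.63 mg/g reported above)
q_e = \frac{q_{\max}\,K_L\,C_e}{1 + K_L\,C_e}
```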

Keywords: adsorption, methylene blue, coffee grounds, kinetic study

Procedia PDF Downloads 215
15064 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
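
As a minimal illustration of the attractor-network alternative described above - stored patterns as stable equilibria of a recurrent network rather than entries in a memory array - a textbook Hopfield-style sketch follows; it is a generic construction, not the specific models discussed in the paper.

```python
# Sketch of a Hopfield-style attractor network: patterns are stored implicitly
# in the weights (Hebbian rule) and recalled as stable fixed points of the dynamics.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian weight matrix
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    state = state.copy()
    for _ in range(steps):                       # synchronous update until stable
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

noisy = np.array([1, -1, -1, -1, 1, -1])         # corrupted version of pattern 0
print(recall(noisy))                             # converges back to the stored pattern
```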

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 257
15063 Study of the Use of Artificial Neural Networks in Islamic Finance

Authors: Kaoutar Abbahaddou, Mohammed Salah Chiadmi

Abstract:

The need to find a relevant way to predict the next-day price of a stock index is a real concern for many financial stakeholders and researchers. Over the years, a variety of methods has proliferated. Nevertheless, among all these methods, the most controversial one is a machine learning algorithm that claims to be reliable, namely neural networks. Thus, the purpose of this article is to study the prediction power of neural networks in the particular case of Islamic finance, as it is an overlooked area. In this article, we first briefly present a review of the literature regarding neural networks and Islamic finance. Next, we present the architecture and principles of the artificial neural networks most commonly used in finance. Then, we show their empirical application on two Islamic stock indexes. The accuracy rate is used to measure the performance of the algorithm in predicting the right price the next day. As a result, we can conclude that artificial neural networks are a reliable method to predict the next-day price for Islamic indices, as is claimed for conventional ones.

Keywords: Islamic finance, stock price prediction, artificial neural networks, machine learning

Procedia PDF Downloads 214
15062 Neural Network Based Approach of Software Maintenance Prediction for Laboratory Information System

Authors: Vuk M. Popovic, Dunja D. Popovic

Abstract:

The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it corresponds to maintenance. Software maintenance involves modifications to keep a software project usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance and the management of software maintenance are recognized as two of the most important and most expensive processes in the life of a software product. This research bases the prediction of maintenance on risk and time evaluation and uses them as data sets for working with neural networks. The aim of this paper is to provide support to project maintenance managers. They will be able to pass the issues planned for the next software-service-patch to the experts for risk and working time evaluation, and afterwards feed all data to neural networks in order to obtain a software maintenance prediction. This process will lead to a more accurate prediction of the working hours needed for the software-service-patch, which will eventually lead to better planning of the budget for software maintenance projects.

Keywords: laboratory information system, maintenance engineering, neural networks, software maintenance, software maintenance costs

Procedia PDF Downloads 337
15061 Prediction of California Bearing Ratio from Physical Properties of Fine-Grained Soils

Authors: Bao Thach Nguyen, Abbas Mohajerani

Abstract:

The California bearing ratio (CBR) has been acknowledged as an important parameter to characterize the bearing capacity of earth structures, such as earth dams, road embankments, airport runways, bridge abutments, and pavements. Technically, the CBR test can be carried out in the laboratory or in the field. The CBR test is time-consuming and is infrequently performed due to the equipment needed and the fact that the field moisture content keeps changing over time. Over the years, many correlations have been developed for the prediction of CBR by various researchers, based for example on the dynamic cone penetrometer, undrained shear strength, and Clegg impact hammer results. This paper reports and discusses some of the results from a study on the prediction of CBR. In the current study, the CBR test was performed in the laboratory on some fine-grained subgrade soils collected from various locations in Victoria. Based on the test results, a satisfactory empirical correlation was found between the CBR and the physical properties of the experimental soils.

Keywords: California bearing ratio, fine-grained soils, soil physical properties, pavement, soil test

Procedia PDF Downloads 494
15060 Vibration and Parametric Instability Analysis of Delaminated Composite Beams

Authors: A. Szekrényes

Abstract:

This paper revisits the free vibration problem of delaminated composite beams. It is shown that during the vibration of composite beams the delaminated parts are subjected to the parametric excitation. This can lead to the dynamic buckling during the motion of the structure. The equation of motion includes time-dependent stiffness and so it leads to a system of Mathieu-Hill differential equations. The free vibration analysis of beams is carried out in the usual way by using beam finite elements. The dynamic buckling problem is investigated locally, and the critical buckling forces are determined by the modified harmonic balance method by using an imposed time function of the motion. The stability diagrams are created, and the numerical predictions are compared to experimental results. The most important findings are the critical amplitudes at which delamination buckling takes place, the stability diagrams representing the instability of the system, and the realistic mode shape prediction in contrast with the unrealistic results of models available in the literature.
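
For reference, the parametric excitation described above leads to equations of the Mathieu-Hill type; in matrix form and in the simplest single-degree-of-freedom form they read (standard forms, not the paper's full finite element system):

```latex
% Mathieu-Hill type equation with time-periodic (parametric) stiffness
\mathbf{M}\ddot{\mathbf{q}} + \left[\mathbf{K}_0 + \mathbf{K}_1\cos(\Omega t)\right]\mathbf{q} = \mathbf{0},
\qquad
\ddot{x} + \omega_0^{2}\left[1 + \varepsilon\cos(\Omega t)\right]x = 0
```

The principal parametric instability (dynamic buckling) region occurs near Ω ≈ 2ω₀, which is what the stability diagrams mentioned above delineate.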

Keywords: delamination, free vibration, parametric excitation, sweep excitation

Procedia PDF Downloads 334
15059 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit

Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey

Abstract:

Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or female athlete triad. Presentation is often non-specific, and the condition is frequently misdiagnosed following the initial examination. There is limited research addressing the return–to–activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return–to–activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student’s T-Test. (3) Lastly, linear regression and Random Forest regression models were used on this patient cohort to predict return–to–activity time. Consequently, an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met the 100% standard in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return–to–activity time [R2 = 79.172%; average error 0.226]. This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R2 = 97.805%; average error 0.024]. Analysis of the importance of each feature again identified a set of 4 variables, 3 of which matched with the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value) and the fourth: age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors. The importance of this evaluation is demonstrated by the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were predicted using regression models.
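
A minimal sketch of the random-forest step described above, using scikit-learn on synthetic placeholder data, follows; the feature names mirror the abstract, but the values, cohort and hyperparameters are illustrative only, not patient data.

```python
# Sketch: random forest regression of return-to-activity time with feature
# importances. The data below are synthetic placeholders, not patient data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 14                                   # cohort size reported in the abstract
X = np.column_stack([
    rng.normal(50, 20, n),               # vitamin D level
    rng.normal(-1.0, 0.8, n),            # total hip DEXA T value
    rng.normal(-1.2, 0.8, n),            # femoral neck DEXA T value
    rng.integers(0, 2, n),               # history of eating disorder (0/1)
    rng.integers(18, 45, n),             # age
])
# arbitrary synthetic target so the example runs end to end
y = 20 - 0.1 * X[:, 0] - 3 * X[:, 2] + 4 * X[:, 3] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["vitamin D", "hip DEXA T", "neck DEXA T", "eating disorder", "age"]
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>16s}: {imp:.3f}")
```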

Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D

Procedia PDF Downloads 170
15058 Educational Institutional Approach for Livelihood Improvement and Sustainable Development

Authors: William Kerua

Abstract:

The PNG University of Technology (Unitech) has a mandate for teaching, research and extension education. Given this function, the Agriculture Department established the ‘South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD)’ in 2004. SPISARD is established as a vehicle to improve the farming systems practiced in selected villages by undertaking a pluralistic extension method through an ‘Educational Institutional Approach’. Unlike other models, SPISARD’s educational institutional approach stresses improving the whole farming system in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of the farming systems in a sustainable way to increase income, improve nutrition and food security, and provide livelihood enhancement trainings. The second is to enrich the Department’s curriculum through teaching, research, extension and inputs from the farming community. SPISARD has established a number of model villages in various provinces in Papua New Guinea (PNG), with many positive outcomes and success stories. The adoption of the ‘educational institutional approach’ thus binds research, extension and training into one package, using students and academic staff and the establishment of model villages to deliver development and extension to communities. This centre (SPISARD) coordinates the activities of the model village programs and linkages. The key to the development of the farming systems is establishing and coordinating linkages, collaboration, and partnerships both within and with external institutions, organizations and agencies. SPISARD has a six-point step strategy for the development of sustainable agriculture and rural development. These steps are (i) establish contact and identify model villages, (ii) develop model village resource centres for research and trainings, (iii) conduct baseline surveys to identify the problems/needs of model villages, (iv) develop solution strategies, (v) implement, and (vi) evaluate the impact of the solution programs. SPISARD envisages that the farming systems practiced can be improved if the villages are made the centre of SPISARD activities. Therefore, SPISARD has developed a model village approach to channel rural development. The model villages, when established, become the conduit points where teaching, training, research, and technology transfer take place. This approach is again different from and unique among the existing ones, in that the development process takes place in the farmers’ environment with immediate ‘real time’ feedback mechanisms based on the farmers’ perspective and satisfaction. So far, we have developed 14 model villages and have conducted 75 trainings in 21 different areas/topics in 8 provinces to a total of 2,832 participants of both sexes. The aim of these trainings is to participate directly with farmers in the pursuit of improving their farming systems to increase productivity and income and to secure food security and nutrition, thus improving their livelihood.

Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture

Procedia PDF Downloads 148
15057 Change Point Detection Using Random Matrix Theory with Application to Frailty in Elderly Individuals

Authors: Malika Kharouf, Aly Chkeir, Khac Tuan Huynh

Abstract:

Detecting change points in time series data is a challenging problem, especially in scenarios where there is limited prior knowledge regarding the data’s distribution and the nature of the transitions. We present a method designed for detecting changes in the covariance structure of high-dimensional time series data, where the number of variables closely matches the data length. Our objective is to achieve unbiased test statistic estimation under the null hypothesis. We delve into the utilization of Random Matrix Theory to analyze the behavior of our test statistic within a high-dimensional context. Specifically, we illustrate that our test statistic converges pointwise to a normal distribution under the null hypothesis. To assess the effectiveness of our proposed approach, we conduct evaluations on a simulated dataset. Furthermore, we employ our method to examine changes aimed at detecting frailty in the elderly.

Keywords: change point detection, hypothesis tests, random matrix theory, frailty in elderly

Procedia PDF Downloads 25
15056 Influence of High Hydrostatic Pressure Application (HHP) and Osmotic Dehydration (DO) as a Pretreatment to Hot-Air Drying of Abalone (Haliotis rufescens) Cubes

Authors: Teresa Roco, Mario Perez Won, Roberto Lemus-Mondaca, Sebastian Pizarro

Abstract:

This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (DO) as a pretreatment to hot-air drying of abalone cubes. The drying time was reduced to 6 hours at 60ºC, as compared to abalone drying with only a 15% NaCl osmotic pretreatment at atmospheric pressure, which took 10 hours at the same temperature. This was due to the salt and HHP saturation, since osmotic pressure increases as water loss increases, thus requiring a shorter convective drying time; the effective water diffusion during drying therefore plays an important role in this research. Different working conditions, such as pressure (350-550 MPa), pressure holding time (5-10 min), salt concentration (15% NaCl) and drying temperature (40-60ºC), were optimized according to the kinetic parameters of each mathematical model (Table 1). The models used for the experimental drying curves were those of Weibull, Logarithmic and Midilli-Kucuk, and the last of these was the best fit to the experimental data (Figure 1). The values for the effective water diffusivity varied from 4.54 to 9.95x10⁻⁹ m²/s for the 8 curves (DO+HHP), whereas the control samples (neither DO nor HHP) varied between 4.35 and 5.60x10⁻⁹ m²/s for 40 and 60°C, respectively, and the samples dried with only the 15% NaCl osmotic pretreatment ranged from 3.804 to 4.36x10⁻⁹ m²/s at the same temperatures. Finally, the energy consumption and efficiency values for the drying process (control and pretreated samples) were found to lie within the ranges of 777-1815 kJ/kg and 8.22-19.20%, respectively. Therefore, knowledge of the drying kinetics and energy consumption, in addition to knowledge about the quality of abalone subjected to an osmotic pretreatment (DO) and high hydrostatic pressure (HHP), is extremely important at an industrial level so that the drying process can be successful under different pretreatment conditions and/or process variables.
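
As an illustration of the thin-layer modelling mentioned above, a minimal sketch of fitting the Midilli-Kucuk model with scipy follows; the moisture-ratio data points are made-up placeholders, not the abalone measurements.

```python
# Sketch: fit the Midilli-Kucuk model MR(t) = a*exp(-k*t**n) + b*t to a
# drying curve. Time in hours; the MR values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def midilli_kucuk(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
mr = np.array([1.00, 0.62, 0.40, 0.27, 0.18, 0.13, 0.10])

params, _ = curve_fit(midilli_kucuk, t, mr, p0=[1.0, 0.5, 1.0, 0.0],
                      bounds=([0, 0, 0.1, -1], [2, 5, 3, 1]))
a, k, n, b = params
print(f"a={a:.3f}, k={k:.3f}, n={n:.3f}, b={b:.4f}")
```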

Keywords: abalone, convective drying, high pressure hydrostatic, pretreatments, diffusion coefficient

Procedia PDF Downloads 658
15055 Periodicity Analysis of Long-Term Water Quality Data Series of the Hungarian Section of the River Tisza Using Morlet Wavelet Spectrum Estimation

Authors: Péter Tanos, József Kovács, Angéla Anda, Gábor Várbíró, Sándor Molnár, István Gábor Hatvani

Abstract:

The River Tisza is the second largest river in Central Europe. In this study, Morlet wavelet spectrum (periodicity) analysis was applied to chemical, biological and physical water quality data for the Hungarian section of the River Tisza. In the research, 15 water quality parameters measured at 14 sampling sites in the River Tisza and 4 sampling sites in the main artificial changes were assessed for the period 1993-2005. Results show that annual periodicity was not always to be found in the water quality parameters, at least at certain sampling sites. Periodicity was found to vary over space and time, but in general, an increase was observed alongside the higher trophic states of the river heading downstream.
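
A minimal sketch of a Morlet continuous wavelet transform on a synthetic monthly series using the PyWavelets package follows; the series is a placeholder with an artificial annual cycle, not the Tisza water-quality data.

```python
# Sketch: Morlet continuous wavelet transform of a synthetic monthly series
# with an annual (12-month) cycle, as a stand-in for a water quality parameter.
import numpy as np
import pywt

months = np.arange(13 * 12)                       # 1993-2005, monthly samples
series = (np.sin(2 * np.pi * months / 12)
          + 0.3 * np.random.default_rng(1).normal(size=months.size))

scales = np.arange(2, 64)
coeffs, freqs = pywt.cwt(series, scales, "morl", sampling_period=1.0)  # period in months
power = np.abs(coeffs) ** 2

dominant_period = 1.0 / freqs[np.argmax(power.mean(axis=1))]
print(f"dominant period of the wavelet power spectrum: {dominant_period:.1f} months")
```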

Keywords: annual periodicity water quality, spatiotemporal variability of periodic behavior, Morlet wavelet spectrum analysis, River Tisza

Procedia PDF Downloads 332
15054 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology used involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; amidst them, configuration 9, with 8 input variables, was identified as the most efficient of all. Configuration 10, with 4 input variables, was considered the most effective, considering the smallest number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their generalization ability for different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. In addition, the results of this study suggest that the developed technique can be applied to other locations by using specific data from these sites to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
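
A minimal sketch of an MLP regression of ET₀ from a few meteorological inputs, in the spirit of the configurations described above, follows; it uses scikit-learn's MLPRegressor with the Adam solver on synthetic data and is not the authors' model (the solver can be switched to "sgd" to mirror the study's other optimizer).

```python
# Sketch: MLP regression of reference evapotranspiration (ET0) from synthetic
# meteorological inputs. Targets come from an arbitrary formula, only to make the
# example runnable; real training would use INMET / producer station data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(7)
n = 2000
temp = rng.uniform(5, 35, n)        # air temperature [C]
rh = rng.uniform(30, 95, n)         # relative humidity [%]
rad = rng.uniform(5, 30, n)         # solar radiation [MJ/m2/day]
wind = rng.uniform(0, 6, n)         # wind speed [m/s]
X = np.column_stack([temp, rh, rad, wind])
et0 = 0.013 * temp * rad + 0.05 * wind * (100 - rh) / 100 + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, et0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE = {mean_absolute_error(y_te, pred):.3f} mm/day, "
      f"R2 = {r2_score(y_te, pred):.3f}")
```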

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 31
15053 Topic-to-Essay Generation with Event Element Constraints

Authors: Yufen Qin

Abstract:

Topic-to-essay generation is a challenging task in natural language processing, which aims to generate novel, diverse, and topic-related text based on user input. Previous research has overlooked the generation of articles under the constraints of event elements, resulting in issues such as incomplete event elements and logical inconsistencies in the generated results. To fill this gap, this paper proposes an event-constrained approach for topic-to-essay generation that enforces the completeness of event elements during the generation process. Additionally, a language model is employed to verify the logical consistency of the generated results. Experimental results demonstrate that the proposed model achieves a better BLEU-2 score and performs better than the baseline in terms of subjective evaluation on a real dataset, indicating its capability to generate higher-quality topic-related text.

Keywords: event element, language model, natural language processing, topic-to-essay generation.

Procedia PDF Downloads 218
15052 Poland and the Dawn of the Right to Education and Development: Moving Back in Time

Authors: Magdalena Zabrocka

Abstract:

The terror faced by women throughout the governance of the current populist ruling party in Poland, PiS, has been a subject of heated debate, alongside the issues of minorities’ rights, the rule of law, and democracy in the country. The challenges that women and other vulnerable groups are currently facing, however, come down to more than just a lack of comprehensive equality laws, severely limited reproductive rights, hateful slogans and messages propagated by the central authority and its sympathisers, or a common disregard for women’s fundamental rights. Many sources and media reports are available only in Polish, while international rapporteurs fail to acknowledge the whole picture of the tragedy happening in the country and the variety of factors affecting it. It starts with the authorities’ and the Polish Catholic church’s propaganda concerning CEDAW and the Istanbul Convention on preventing and combating violence against women and domestic violence, spreading strategic disinformation that these instruments codify ‘gender ideology’ and ‘anti-Christian values’ in order to convince the electorate that the legal instruments should be ‘abandoned’. Alongside severely restricted abortion rights, the bullying of medical professionals helping women exercise their reproductive rights, violations of women’s privacy through a mandatory registry of pregnancies (so that one’s pregnancy or its ‘loss’ can be tracked and traced), restricted access to the ‘day after pill’ and to real sex education at schools (most schools offer only a subject on ‘knowledge of living in a family’), the introduction of prison sentences for teachers accused of spreading ‘sex education’, and much else, the current tyrannical government has now decided to target the youngest with its misinformation and indoctrination, via strategically designed textbooks and curricula. Biology books have seen a substantial reduction in the size of the chapters devoted to evolution, the reproductive system, and sexual health. Approved religion books (a subject taught 2-3 times a week, compared to once a week for the sciences) now contain false information about Darwin’s theory and arguments ‘against it’. Most recently, however, the public spoke up against the absurd messages contained in the politically rewritten history books, where material about some figures not liked by the governing party has already been manipulated. In the recently approved changes to the history textbook, one can find a variety of strongly biased and politically charged views representative of the conservatives in the states, most notably equating ‘gender ideology’ and feminism with Nazism. Thus, this work, employing a human rights approach, focuses on the right to education and development as well as the considerable obstacles to young people’s access to scientific information.

Keywords: Poland, right to education, right to development, authoritarianism, access to information

Procedia PDF Downloads 88
15051 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack after roasting in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. The roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as the reference. The proximate analysis (moisture, energy and crude fat) was performed and the results were determined using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of the sun-dried and roasted seed samples in 3M hydrochloric acid and then measuring by Atomic Absorption Spectrometry. All results were subjected to ANOVA using SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) contents of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were also significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly (p ≤ 0.05) affected. Generally, the tested runner groundnut seeds had a higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product; loss of moisture ensures prolonged shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content in roasted groundnuts is important, as antioxidants are health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting is sufficient to suggest an influence of roasting time on the food value of the final product and its shelf life.

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 271
15050 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage

Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni

Abstract:

Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various measurement angles and heights, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then verified using a verification board with known dimensions. Results indicated that the low-dose diagnostic X-ray has the capability to clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture. The X-ray source and detector distance from the object must be standardised to control for possible magnification changes and for comparison purposes. To account for this, specific scanning heights and angles were identified to allow for parallel scanning of relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for the re-evaluation of the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.

Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage

Procedia PDF Downloads 112
15049 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications

Authors: Alissa Lefebre

Abstract:

It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to obstruct others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at the grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually get granted and what the scope of protection will be. Consequently, rivals can only depend upon limited and uncertain information when deciding what technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this might allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has its downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, using strategies that increase this number even more is not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide you with the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant. Consequently, the European Patent Office (EPO) has come up with a 'raising the bar' initiative in which it has decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit under which divisional applications could only be filed within 24 months after the first communication with the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle strategic patenting and thus decrease the pendency time.
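
The abstract does not name the three survival models used; as one hedged illustration of this type of analysis, a Cox proportional-hazards sketch with the lifelines package on synthetic data follows, with purely illustrative covariates.

```python
# Sketch: Cox proportional hazards model of patent pendency time. The data are
# synthetic; 'has_divisional' and 'post_2014_rule' are illustrative covariates only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
has_divisional = rng.integers(0, 2, n)
post_2014_rule = rng.integers(0, 2, n)
# assumed: longer pendency when a divisional is filed, mitigated by the fee rule
pendency = rng.exponential(
    scale=3 + 2 * has_divisional - 1 * post_2014_rule * has_divisional, size=n)
granted = rng.random(n) < 0.9          # ~10% right-censored applications

df = pd.DataFrame({"pendency_years": pendency,
                   "granted": granted.astype(int),
                   "has_divisional": has_divisional,
                   "post_2014_rule": post_2014_rule})

cph = CoxPHFitter().fit(df, duration_col="pendency_years", event_col="granted")
cph.print_summary()                    # hazard ratios: <1 for has_divisional means slower grant
```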

Keywords: divisional applications, regulatory changes, strategic patenting, EPO

Procedia PDF Downloads 116
15048 Four-dimensional (4D) Decoding Information Presented in Reports of Project Progress in Developing Countries

Authors: Vahid Khadjeh Anvary, Hamideh Karimi Yazdi

Abstract:

Generally, the tool for comparing the performance of each stage in the life of a project is the progress figure reported for that period, which in most cases is determined in only one dimension, with reference to one of three factors (physical, time, or financial). In many projects in developing countries, there are controversies over the accuracy of progress reports and the way they are analyzed, which hinders drawing definitive, engineering-based conclusions on the status of a project. Identifying the weak points of this one-dimensional view of a project and determining a reliable, engineering-based approach for the multi-dimensional decoding of the information obtainable from a project are of great importance in project management. This can be a tool to help identify the hidden diseases of a project before irreversible symptoms appear, which are usually delays or increased execution costs. The method used in this paper is to define and evaluate a hypothetical project as an example, analyzing different scenarios and comparing them numerically along with related graphs and tables. Finally, by analyzing the different possible scenarios in the project, the possibility or impossibility of predicting their occurrence is examined through the evidence.

Keywords: physical progress, time progress, financial progress, delays, critical path

Procedia PDF Downloads 363
15047 Beijing Xicheng District Housing Price Econometric Analysis: “Multi-School Zoning” Policy

Authors: Haoxue Cui, Sirui Zhang, Shanshan Gao, Weiyi Zhang, Lantian Wang, Xuanwen Zheng

Abstract:

The 2020 "multi-school zoning" policy makes students ineligible for direct attendance in their district. To study whether the housing price trend of the school district is affected by the policy, This paper studies housing prices based on the school district division in Xicheng District, Beijing. In this paper, we collected housing prices and the basic situation of communities from "Anjuke", which were divided into two periods of 15 months before and after the 731 policy in the Xicheng District, Beijing. Then we used DID model and time fixed effect to investigate the DIFFERENTIAL statistics, that is, the overall net impact of the policy. The results show that the coefficient is negative at a certain statistical level. It indicates that the housing prices of school districts in the Xicheng district decreased after the "multi-school zoning" policy, which shows that the policy has effectively reduced the housing price of school districts in the Xicheng District and laid a foundation for the "double reduction" policy in 2022.

Keywords: “multi-school zoning”policy, DID, time fixed effect, housing prices

Procedia PDF Downloads 138
15046 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications

Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu

Abstract:

Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain. Resources are wasted if too few tasks are assigned to a server; on the contrary, the server is overloaded if too many tasks are assigned. This is especially obvious when the applications are of the same type, because of their similar resource preferences. Considering that CPU-intensive applications are one of the most common types of application in the cloud, we studied an optimization strategy for CPU-intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU-intensive applications run simultaneously and put forward a model that can predict the execution time of CPU-intensive applications running simultaneously. Based on the prediction model, we proposed a method to select the appropriate number of applications for a machine. Experiments show that the model can predict the execution time accurately for CPU-intensive applications. To improve the execution efficiency of applications, we also propose a priority-based scheduling model for CPU-intensive applications. Extensive experiments verify the validity of the scheduling model.

Keywords: cloud computing, CPU intensive applications, resource optimization, strategy

Procedia PDF Downloads 268
15045 The Usage of Nitrogen Gas and Alum for Sludge Dewatering

Authors: Mamdouh Yousef Saleh, Medhat Hosny El-Zahar, Shymaa El-Dosoky

Abstract:

In most cases, the processing cost associated with dewatering sludge increases with the solid particle concentration. All experiments in this study were conducted on biological sludge. All experiments help to reduce greenhouse gases; in addition, the technology used was faster and cheaper compared to other methods. First, bubbling pressure was used to dissolve N₂ gas into the sludge; second, alum was added to accelerate the coagulation of the sludge particles and facilitate their flotation; and third, nitrogen gas was used to help float the sludge particles and reduce the processing time, since nitrogen is one of the inert gases. The conclusions of this experiment were as follows. First, the best conditions were obtained when the bubbling pressure was 0.6 bar. Second, the best alum dose was determined to help the sludge agglomerate and float; during the experiment, the best alum dose was 80 mg/L, which increased the sludge concentration by 7-8 times. Third, the economic dose of nitrogen gas was 60 mg/L, with a separation efficiency of 85%; the sludge concentration increased by about 8-9 times. This happened because the gas releases tiny bubbles that adhere to the suspended matter, causing it to float to the surface of the water, where it can then be removed.

Keywords: nitrogen gas, biological treatment, alum, dewatering sludge, greenhouse gases

Procedia PDF Downloads 196
15044 Metaphysics of the Unified Field of the Universe

Authors: Santosh Kaware, Dnyandeo Patil, Moninder Modgil, Hemant Bhoir, Debendra Behera

Abstract:

The Unified Field Theory has been an area of intensive research for many decades. This paper focuses on the philosophy and metaphysics of unified field theory at the Planck scale and its relationship with superstring theory and Quantum Vacuum Dynamics Physics. We examine the epistemology of questions such as: (1) What is the Unified Field of the universe? (2) Can it actually (a) permeate the complete universe, (b) be localized in bound regions of the universe, (c) extend into the extra dimensions, or (d) live only in extra dimensions? (3) What should be the emergent ontological properties of the Unified Field? (4) How does the universe manifest through its Quantum Vacuum energies? (5) How is the space-time metric coupled to the Unified Field? We present a number of ansätze, which we outline below. It is proposed that the unified field possesses consciousness as well as a memory - a recording of past history - analogous to the ‘Consistent Histories’ interpretation of quantum mechanics. We propose a Planck-scale geometry of the Unified Field with circle-like topology, having 32 energy points on its periphery which are connected to each other by 10-dimensional meta-strings, which are the sources for the manifestation of the different fundamental forces and particles of the universe through its Quantum Vacuum energies. It is also proposed that the sub-energy levels of the ‘Conscious Unified Field’ are used for the processes of creation, preservation and rejuvenation of the universe over a period of time by means of negentropy. These epochs can be for the complete universe, or for localized regions such as galaxies or clusters of galaxies. It is proposed that the Unified Field operates through geometric patterns of its Quantum Vacuum energies, manifesting as various elementary particles by giving spins to zero-point energy elements. The epistemological relationship between unified field theory and superstring theories is examined. The properties of ‘consciousness’ and ‘memory’ cascade from the universe into macroscopic objects, and further onto the elementary particles, via a fractal pattern. Other properties of fundamental particles, such as mass, charge, spin and isospin, also spill out of such a cascade. The manifestations of the unified field can reach into the parallel universes or the ‘multiverse’ and essentially have an existence independent of space-time. It is proposed that the mass, length and time scales of the unified theory are smaller than even the Planck scale, at a level which we call that of ‘Super Quantum Gravity (SQG)’.

Keywords: super string theory, Planck scale geometry, negentropy, super quantum gravity

Procedia PDF Downloads 259
15043 Managing the Cloud Procurement Process: Findings from a Case Study

Authors: Andreas Jede, Frank Teuteberg

Abstract:

Cloud computing (CC) has already gained overall appreciation in research and practice. Whereas the willingness to integrate cloud services into various IT environments is still unbroken, previous CC procurement processes have mostly run in an unorganized and non-standardized way. In practice, a sufficiently specific yet applicable business process for the important acquisition phase is often lacking, and research has not yet appropriately remedied this deficiency. Therefore, this paper introduces a field-tested approach for CC procurement. Based on an extensive literature review and augmented by expert interviews, we designed a model that is validated and further refined through an in-depth real-life case study. For the detailed process description, we apply the event-driven process chain (EPC) notation. The valuable insights gained from the case study may help CC research shift to a more socio-technical area. For practice, in addition to giving useful organizational instructions, we provide extended checklists and lessons learned.

Keywords: cloud procurement process, IT-organization, event-driven process chain, in-depth case study

Procedia PDF Downloads 383