Search results for: cloud service models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10253

5093 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of food products. Despite its importance, comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest remain scarce. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits, yet they are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project was to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, the study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterised with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for predictive models capable of estimating bioaccessibility from specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids.
In fact, macronutrient content emerged as a very informative explanatory variable of bioaccessibility and was used, alongside other variables, as predictors in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R2) for the majority of cross-validated test formulations (LOOCV of 0.92 R2). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. This study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
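The abstract does not specify the Bayesian hierarchical model in detail; as a rough illustration of the leave-one-out protocol it describes, the sketch below fits a MAP Bayesian linear regression (equivalent to ridge regression) to hypothetical macronutrient data and scores it with LOOCV R2. All data, features and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical formulation data: columns = fat, protein, carbohydrate (g/100 g)
X = rng.uniform(0, 30, size=(40, 3))
true_w = np.array([1.2, 0.4, -0.3])
y = 20 + X @ true_w + rng.normal(0, 1.0, size=40)   # bioaccessibility (%)

def map_fit(X, y, lam=1.0):
    """MAP estimate of Bayesian linear regression with a Gaussian prior
    on the weights (ridge regression)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

# Leave-one-out cross-validation, as in the abstract's LOOCV R2
preds = np.empty(len(y))
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    w = map_fit(X[mask], y[mask])
    preds[i] = predict(w, X[i:i+1])[0]

ss_res = np.sum((y - preds) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_loocv = 1 - ss_res / ss_tot
print(f"LOOCV R2: {r2_loocv:.3f}")
```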

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 58
5092 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño-Southern Oscillation (ENSO). The region is selected to provide more accurate long-term trends and variabilities than were previously possible with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous SABER temperature and ozone data sets are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; no other significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is most positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. Composite monthly means of the ozone volume mixing ratio show maximum values (around 10 ppmv) during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km).
Composite monthly means of temperature show a semi-annual variation, with large values (~250-260 K) in equinox months and smaller values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the semi-annual oscillation (SAO) becomes weaker above 55 km. The semi-annual variation reappears at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), low temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and AO show downward progression. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the outcome will be presented at the meeting.
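A trend-plus-proxies multiple regression of the kind used for the SABER analysis can be sketched on synthetic monthly data. The proxy series and responses below are invented (chosen near the reported magnitudes), purely to show how the trend and the solar, QBO and ENSO coefficients are separated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 132                                   # 2002-2012, monthly
t = np.arange(n_months) / 120.0                  # time in decades
f107 = 100 + 60 * np.sin(2 * np.pi * t / 1.1)    # ~11-yr solar cycle proxy (SFU)
qbo = 10 * np.sin(2 * np.pi * np.arange(n_months) / 28)  # ~28-month QBO proxy
soi = rng.normal(0, 1, n_months)                 # ENSO (SOI) proxy

# Synthetic mesospheric temperature: -3 K/decade trend plus proxy responses
temp = (200 - 3.0 * t + 0.037 * f107 + 0.1 * qbo - 0.5 * soi
        + rng.normal(0, 0.3, n_months))

# Multiple linear regression: intercept, linear trend, solar, QBO, ENSO terms
X = np.column_stack([np.ones(n_months), t, f107, qbo, soi])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"trend: {coef[1]:.2f} K/decade, solar response: {coef[2]*100:.2f} K/100 SFU")
```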

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 398
5091 Fragility Assessment for Torsionally Asymmetric Buildings in Plan

Authors: S. Feli, S. Tavousi Tafreshi, A. Ghasemi

Abstract:

The present paper evaluates the response of three-dimensional buildings with in-plan stiffness irregularities subjected simultaneously to two-way ground motion excitation. The study is a broad-based fragility assessment, with particular emphasis on the structural response at the in-plan flexible and stiff sides. To this end, three types of three-dimensional 5-story steel building structures with stiffness eccentricities were subjected to extensive nonlinear incremental dynamic analyses (IDA) utilizing Ibarra-Krawinkler deterioration models. Fragility assessment was implemented for different configurations of braces to investigate the losses in buildings with center of resisting (CR) eccentricities.
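Fragility results from IDA are conventionally summarised as lognormal CDFs of an intensity measure. A minimal sketch of that form, with assumed median capacity and dispersion (the paper's actual fitted parameters are not given in the abstract):

```python
import math

def fragility(im, theta, beta):
    """Probability of exceeding a damage state at intensity measure `im`,
    using the standard lognormal fragility form P = Phi(ln(im/theta)/beta)."""
    if im <= 0:
        return 0.0
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for the flexible side of a torsionally irregular frame
theta, beta = 0.8, 0.4   # median capacity (g) and lognormal dispersion (assumed)
for sa in (0.2, 0.5, 0.8, 1.2):
    print(f"Sa = {sa:.1f} g -> P(exceedance) = {fragility(sa, theta, beta):.3f}")
```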

Keywords: Ibarra-Krawinkler, fragility assessment, flexible and stiff side, center of resisting

Procedia PDF Downloads 192
5090 Real Activities Manipulation vs. Accrual Earnings Management: The Effect of Political Risk

Authors: Heba Abdelmotaal, Magdy Abdel-Kader

Abstract:

Purpose: This study explores whether a firm’s effective political risk management prevents real and accrual earnings management. Design/methodology/approach: Based on a sample of 130 firms operating in Egypt during the period 2008-2013, two hypotheses are tested using panel data regression models. Findings: The empirical findings indicate a significant relation between real and accrual earnings management and political risk. Originality/value: This paper provides statistical evidence on the effects of political risk management failure on managers’ engagement in real and accrual earnings management practices, and on its impact on firm performance.

Keywords: political risk, risk management failure, real activities manipulation, accrual earnings management

Procedia PDF Downloads 421
5089 The Tariffs of Water Service for Productive Users: A Model for Defining Fare Classes

Authors: M. Macchiaroli, V. Pellecchia, L. Dolores

Abstract:

The water supply for production users (craft, commercial, industrial), understood as the combined water supply and wastewater collection services, is an increasingly pressing problem under a regime of water scarcity. Disputes are triggered between the different social parties over the fair and efficient use of water resources. Within this context arises the problem of the different pricing of services for civil users and production users, and of particular interest is the question of defining tariff classes according to consumption levels. For civil users, this theme is strongly permeated by social concerns (a topic dealt with by the author in a forthcoming research contribution) connected with the inalienability of the right to water and the reconciliation of the needs of the weakest groups of the population; for consumers in the production sector, the logic adopted by the operator may instead be inspired by criteria of greater corporate rationality. This work illustrates the Italian regulatory framework and presents an optimization model of tariff classes in the production sector that reconciles the public objective of sustainable use of the resource with the needs of a production system seeking recovery after the depressive effects of the COVID-19 pandemic.

Keywords: decision making, economic evaluation, urban water management, water tariff

Procedia PDF Downloads 101
5088 Studies on the Applicability of Artificial Neural Network (ANN) in Prediction of Thermodynamic Behavior of Sodium Chloride Aqueous System Containing a Non-Electrolytes

Authors: Dariush Jafari, S. Mostafa Nowee

Abstract:

In this study, a ternary system containing sodium chloride as solute, water as primary solvent and ethanol as the antisolvent was considered to investigate the application of an artificial neural network (ANN) in predicting sodium chloride solubility in the water-ethanol mixture. The system was previously studied by the authors using the Extended UNIQUAC model. The comparison between the results of the two models shows an excellent agreement (R2=0.99) and confirms the capability of ANNs to predict the thermodynamic behavior of ternary electrolyte systems, which are otherwise difficult to model.
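The abstract's ANN architecture is not specified; as a dependency-free stand-in, the sketch below trains a one-hidden-layer network by plain gradient descent on an invented smooth solubility-like surface of temperature and ethanol fraction. It illustrates the regression setup only, not the authors' actual network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: scaled temperature and ethanol mass fraction
X = rng.uniform(0, 1, size=(200, 2))
y = (0.36 - 0.3 * X[:, 1] + 0.05 * X[:, 0]).reshape(-1, 1)  # toy solubility surface

# One hidden layer with tanh activation, trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
losses = []
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error gradient
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = dpred @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.6f}")
```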

Keywords: thermodynamic modeling, ANN, solubility, ternary electrolyte system

Procedia PDF Downloads 375
5087 Predicting Medical Check-Up Patient Re-Coming Using Sequential Pattern Mining and Association Rules

Authors: Rizka Aisha Rahmi Hariadi, Chao Ou-Yang, Han-Cheng Wang, Rajesri Govindaraju

Abstract:

As medical check-ups grow in popularity, a huge amount of medical check-up data is stored in databases without being put to use. These data can be very useful for future strategic planning if mined correctly. At the same time, patients arrive unpredictably and the available facilities are limited, so the medical check-up service offered by hospitals is not used to its full capacity. To address this problem, this study used medical check-up data to predict patient re-coming. Sequential pattern mining (SPM) and association rules were chosen because these methods are suitable for predicting patient re-coming from sequential data. First, based on patient personal information, the data was grouped into … groups, and discriminant analysis was performed to check the significance of the grouping. Second, frequent patterns were generated for each group using the SPM method. Third, based on the frequent patterns of each group, pairs of variables were extracted using association rules to obtain a general pattern of re-coming patients. Finally, the results are discussed and conclusions drawn, together with some implications of the findings.
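The two core quantities of the approach, sequential pattern support and association-rule confidence, can be sketched on toy visit histories (all data invented):

```python
# Hypothetical check-up histories: one ordered list of visit outcomes per patient
sequences = [
    ["normal", "borderline", "recheck"],
    ["normal", "borderline", "recheck"],
    ["normal", "normal"],
    ["borderline", "recheck"],
    ["normal", "borderline"],
]

def sequential_support(sequences, pattern):
    """Fraction of sequences containing `pattern` as an ordered subsequence."""
    def contains(seq, pat):
        it = iter(seq)                 # consuming the iterator enforces order
        return all(item in it for item in pat)
    return sum(contains(s, pattern) for s in sequences) / len(sequences)

def confidence(sequences, antecedent, consequent):
    """Association-rule confidence: P(antecedent then consequent | antecedent)."""
    return (sequential_support(sequences, antecedent + consequent)
            / sequential_support(sequences, antecedent))

sup = sequential_support(sequences, ("borderline", "recheck"))
conf = confidence(sequences, ("borderline",), ("recheck",))
print(f"support = {sup:.2f}, confidence = {conf:.2f}")
```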

Keywords: patient re-coming, medical check-up, health examination, data mining, sequential pattern mining, association rules, discriminant analysis

Procedia PDF Downloads 625
5086 Lee-Carter Mortality Forecasting Method with Dynamic Normal Inverse Gaussian Mortality Index

Authors: Funda Kul, İsmail Gür

Abstract:

Pension scheme providers have to price mortality risk with an accurate mortality forecasting method. Many mortality forecasting methods have been constructed and used in the literature. The Lee-Carter model was the first to consider stochastic improvement trends in life expectancy and is still widely used. In the Lee-Carter model, forecasting is carried out through the mortality index, which is usually assumed to follow an ARIMA time series model. In this paper, we propose a dynamic normal inverse Gaussian distribution to model the mortality index in the Lee-Carter model. Using population mortality data for Italy, France, and Turkey, the model's forecasting capability is investigated, and a comparative analysis with other models is carried out using several well-known benchmark criteria.
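The classical Lee-Carter decomposition that the paper builds on (before replacing the ARIMA index dynamics with the proposed NIG distribution) can be sketched via SVD on a synthetic log-mortality surface:

```python
import numpy as np

rng = np.random.default_rng(3)
ages, years = 10, 30

# Synthetic log-mortality surface with Lee-Carter structure: ln m_xt = a_x + b_x * k_t
a_true = np.linspace(-6, -2, ages)
b_true = np.linspace(0.05, 0.15, ages); b_true /= b_true.sum()
k_true = np.linspace(15, -15, years)
log_m = (a_true[:, None] + b_true[:, None] * k_true[None, :]
         + rng.normal(0, 0.01, (ages, years)))

# Classical Lee-Carter fit: a_x = row means, (b_x, k_t) from the leading SVD mode
a_x = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x, k_t = U[:, 0], s[0] * Vt[0]
# Identifiability constraint sum(b_x) = 1 (k_t rescaled to keep b_x * k_t fixed);
# k_t already averages to ~0 because each row was centred by a_x
k_t = k_t * b_x.sum()
b_x = b_x / b_x.sum()

recon = a_x[:, None] + b_x[:, None] * k_t[None, :]
print("max reconstruction error:", np.abs(recon - log_m).max())
```

The forecasting step then fits a time-series model to k_t; the paper's contribution is to replace the usual ARIMA assumption there.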

Keywords: mortality, forecasting, lee-carter model, normal inverse gaussian distribution

Procedia PDF Downloads 344
5085 Tumor Detection Using Convolutional Neural Networks (CNN) Based Neural Network

Authors: Vinai K. Singh

Abstract:

Among neural-network-based learning techniques there are several models of convolutional networks, and their applicability and appropriateness can only be determined when they are deployed on large datasets. Clinical and pathological images of lobular carcinoma are thought to exhibit a large number of random formations and textures, and working with such images is a difficult problem in machine learning. Numerous studies focusing on wet laboratories and their outcomes have been published with fresh commentaries on this line of investigation. In this research, we provide a framework that operates effectively on raw images of various resolutions while easing the issues caused by the presence of patterns and texturing. The suggested approach produces very good results that may be used to support decisions in the diagnosis of cancer.

Keywords: lobular carcinoma, convolutional neural networks (CNN), deep learning, histopathological imagery scans

Procedia PDF Downloads 122
5084 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models

Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling

Abstract:

Since supplier selection is a vital decision, selecting suppliers in the best and most accurate way is of great importance for enterprises. In this study, a new artificial intelligence approach is applied to remove weaknesses of supplier selection. The paper has three parts. The first is choosing appropriate criteria for assessing the suppliers’ performance; the second is collecting the data set from expert assessments. The data set is then divided into a training set and a testing set: the training set is used to select the best structures of the GEP and ANN models, and the testing set is used to evaluate the power of these methods. The results show that the accuracy of GEP is higher than that of ANN. Moreover, unlike ANN, GEP provides an explicit mathematical equation for supplier selection.

Keywords: supplier selection, automotive supply chains, ANN, GEP

Procedia PDF Downloads 610
5083 A Genetic-Neural-Network Modeling Approach for Self-Heating in GaN High Electron Mobility Transistors

Authors: Anwar Jarndal

Abstract:

In this paper, a genetic-neural-network (GNN) based large-signal model for GaN HEMTs is presented along with its parameter extraction procedure. The model is easy to construct and implement in CAD software and requires only DC and S-parameter measurements. An improved decomposition technique is used to model the self-heating effect. Two GNN models are constructed to simulate the isothermal drain current and the power dissipation, respectively. The two models are then combined to simulate the drain current. The modeling procedure was applied to a packaged GaN-on-Si HEMT, and the developed model is validated by comparing its large-signal simulation with measured data. A very good agreement between simulation and measurement is obtained.

Keywords: GaN HEMT, computer-aided design and modeling, neural networks, genetic optimization

Procedia PDF Downloads 366
5082 A Blind Three-Dimensional Meshes Watermarking Using the Interquartile Range

Authors: Emad E. Abdallah, Alaa E. Abdallah, Bajes Y. Alskarnah

Abstract:

We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, i.e. the spread, of the 3D model vertices. The algorithm starts by converting all the vertices to spherical coordinates and partitioning them into small groups. It then slightly alters the interquartile range distribution of the small groups according to a predefined watermark. Experimental results on several 3D meshes demonstrate the perceptual invisibility and the robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling and rotation, as well as combinations of these attacks.
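The embedding step described (spherical conversion, grouping, IQR alteration) can be sketched as below. The grouping rule, scaling factors and bit string are invented, and this toy detector compares against the original IQRs, so unlike the paper's scheme it is not blind; it only illustrates how nudging each group's spread encodes bits:

```python
import numpy as np

rng = np.random.default_rng(4)
vertices = rng.normal(0, 1, (1200, 3))        # stand-in for 3D mesh vertices

rho = np.linalg.norm(vertices, axis=1)        # spherical radius of each vertex

def iqr(x):
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

# Partition the vertices into groups (here by sorted radius) and slightly
# stretch or shrink each group's spread about its median: one bit per group
bits = [1, 0, 1, 1]
groups = np.array_split(np.argsort(rho), len(bits))
rho_marked = rho.copy()
for bit, idx in zip(bits, groups):
    med = np.median(rho[idx])
    factor = 1.02 if bit else 0.98            # small, perceptually invisible change
    rho_marked[idx] = med + factor * (rho[idx] - med)

# Recover the bits by comparing each group's IQR before and after embedding
detected = [int(iqr(rho_marked[idx]) > iqr(rho[idx])) for idx in groups]
print("embedded:", bits, "detected:", detected)
```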

Keywords: watermarking, three-dimensional models, perceptual invisibility, interquartile range, 3D attacks

Procedia PDF Downloads 460
5081 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, the calibration of the dynamical system is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but places strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression and partial least squares regression) and machine learning methods (random forest, k-nearest neighbors, artificial neural networks and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA), together with the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, random forest is the most robust and generally achieves the lowest prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several complementary perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
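The paper's best performer is a random forest; to keep this sketch dependency-free, it instead demonstrates the evaluation protocol (5-fold cross-validation scored with RMSEP and MAEP) using one of the paper's other baselines, k-nearest-neighbour regression, on an invented climate-to-yield dataset:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical county-scale dataset: 4 climate features -> corn yield
X = rng.uniform(0, 1, (300, 4))
y = 80 + 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 2] * X[:, 3] + rng.normal(0, 2, 300)

def knn_predict(X_tr, y_tr, X_te, k=5):
    """Plain k-nearest-neighbour regression (one of the data-driven baselines)."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return y_tr[nn].mean(axis=1)

# 5-fold cross-validation with RMSEP and MAEP, as in the paper's protocol
idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)
rmsep, maep = [], []
for f in folds:
    tr = np.setdiff1d(idx, f)
    pred = knn_predict(X[tr], y[tr], X[f])
    err = pred - y[f]
    rmsep.append(np.sqrt(np.mean(err ** 2)))
    maep.append(np.mean(np.abs(err) / y[f]) * 100)   # percentage error

print(f"RMSEP = {np.mean(rmsep):.2f}, MAEP = {np.mean(maep):.2f}%")
```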

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 219
5080 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations that deal with extreme and abnormal events and must provide optimal and safe management of the supplies needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Such extreme events cause significant losses and damage to infrastructure. In these cases, intelligent support technologies are very important for quick and optimal location-transportation of emergency services in order to avoid further losses. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system, and scientific research in this field occupies an important place in decision-making. Our goal was to create an expert knowledge-based intelligent support system to serve as an assistant tool providing optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations; the outputs are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were carried out on an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model: the first minimizes the expected total transportation duration of needed products; the second minimizes the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs); the third minimizes the number of agents needed to operate the opened HADCs; the fourth minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed from objective and subjective data. The FMOELTP is formulated in a static, fuzzy environment, since the decisions must be taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations or their experts, based on their experience and their evaluation of the disaster’s seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem, and it is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large instances, the exact method can require long computing times, so we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
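The epsilon-constraint idea can be illustrated on a toy bi-objective facility problem with invented numbers: minimise opening cost and uncovered demand, sweeping the bound on the second objective and re-solving for the first:

```python
from itertools import combinations

# Toy candidate HADC sites: (opening cost, demand covered) -- invented data
sites = [(4, 30), (3, 20), (5, 45), (2, 15)]
total_demand = 90

def objectives(subset):
    cost = sum(sites[i][0] for i in subset)
    uncovered = max(0, total_demand - sum(sites[i][1] for i in subset))
    return cost, uncovered

all_subsets = [s for r in range(len(sites) + 1)
               for s in combinations(range(len(sites)), r)]

# Epsilon-constraint: minimise cost subject to uncovered demand <= eps,
# sweeping eps to trace out candidate points of the Pareto front
front = set()
for eps in range(total_demand + 1):
    feasible = [s for s in all_subsets if objectives(s)[1] <= eps]
    if feasible:
        best = min(feasible, key=lambda s: objectives(s)[0])
        front.add(objectives(best))

# Keep only the non-dominated points
pareto = sorted(p for p in front
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in front))
print(pareto)
```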

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 304
5079 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in generative artificial intelligence (GenAI), in particular large language models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinder adoption into production. A common practice to mitigate the hallucination problem is a Retrieval Augmented Generation (RAG) system that grounds the LLM's responses in ground truth: RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information in the retrieved documents. However, RAG systems are not suitable for tabular data and the data analysis tasks that follow, for several reasons such as information loss, data format and the retrieval mechanism. In this study, we explore a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, convert them into executable segments of code, and, in the final step, generate the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it served market insight and data visualization needs with high accuracy and extensive coverage while abstracting the complexities away from real estate agents and developers with non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
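The plan-then-generate-then-execute loop can be sketched as below. This is a heavily simplified illustration, not PropertyGuru's implementation: `fake_llm` stands in for real LLM calls with canned outputs, and the data, prompts and function names are all invented:

```python
# Toy tabular data standing in for a property listings table
listings = [
    {"district": "D1", "price": 500_000},
    {"district": "D1", "price": 700_000},
    {"district": "D2", "price": 400_000},
]

def fake_llm(prompt):
    # A production system would call an LLM here; we return canned outputs
    if prompt.startswith("PLAN:"):
        return ["filter listings in district D1", "compute the average price"]
    return ("result = sum(r['price'] for r in listings if r['district'] == 'D1') / "
            "len([r for r in listings if r['district'] == 'D1'])")

def answer(question, data):
    plan = fake_llm("PLAN: " + question)          # 1. decompose into sub-tasks
    code = fake_llm("CODE: " + "; ".join(plan))   # 2. generate executable code
    scope = {"listings": data}
    exec(code, scope)                             # 3. execute (sandbox this in practice)
    return scope["result"]                        # 4. compose the final answer

print(answer("What is the average price in district D1?", listings))
```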

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 60
5078 Optimization of Electric Vehicle (EV) Charging Station Allocation Based on Multiple Data - Taking Nanjing (China) as an Example

Authors: Yue Huang, Yiheng Feng

Abstract:

Due to global pressure on climate and energy, many countries are vigorously promoting electric vehicles and building public charging facilities. Faced with the supply-demand gap of existing electric vehicle charging stations and unreasonable space usage in China, this paper takes the central city of Nanjing as an example: it establishes a site selection model through multivariate data integration, conducts multiple linear regression analysis in SPSS, gives quantitative site selection results, and provides optimization models and suggestions for charging station layout planning.

Keywords: electric vehicle, charging station, allocation optimization, urban mobility, urban infrastructure, nanjing

Procedia PDF Downloads 77
5077 Analysis of Organizational Factors Effect on Performing Electronic Commerce Strategy: A Case Study of the Namakin Food Industry

Authors: Seyed Hamidreza Hejazi Dehghani, Neda Khounsari

Abstract:

The quick growth of electronic commerce in developed countries means that developing nations must fundamentally change their commerce strategies. Most organizations are aware of the impact of the Internet and e-commerce on the future of their firm, and thus they have to focus on the organizational factors that affect the deployment of an e-commerce strategy. In this situation, it is essential to identify organizational factors such as organizational culture, human resources, size, structure and product/service that impact an e-commerce strategy. Accordingly, this research examines the effects of organizational factors on applying an e-commerce strategy in the Namakin food industry. The statistical population of this research is 95 managers and employees; Cochran's formula gives a sample size of 77. SPSS and SmartPLS software were utilized for analyzing the collected data. The results of hypothesis testing show that organizational factors have positive and significant effects on applying an e-commerce strategy. Among the sub-hypotheses, those for organizational culture and size were rejected, while the others were accepted.

Keywords: electronic commerce, organizational factors, attitude of managers, organizational readiness

Procedia PDF Downloads 266
5076 Geostatistical Analysis of Contamination of Soils in an Urban Area in Ghana

Authors: S. K. Appiah, E. N. Aidoo, D. Asamoah Owusu, M. W. Nuonabuor

Abstract:

Urbanization remains one of the predominant factors linked to the destruction of the urban environment and the associated contamination of soils by heavy metals through natural and anthropogenic activities. These activities are important sources of toxic heavy metals such as arsenic (As), cadmium (Cd), chromium (Cr), copper (Cu), iron (Fe), manganese (Mn), lead (Pb), nickel (Ni) and zinc (Zn). These heavy metals often reach elevated levels in some areas due to atmospheric deposition caused by proximity to industrial plants or the indiscriminate burning of substances. Information on potentially hazardous levels of these heavy metals in soils has serious implications for health and urban agriculture. However, the characterization of spatial variations of soil contamination by heavy metals in Ghana is limited. Kumasi is a metropolitan city in Ghana, West Africa, and is challenged with deteriorating soil quality due to rapid economic development and other human activities such as "galamsey", illegal mining operations within the metropolis. This paper uses both univariate and multivariate geostatistical techniques to assess the spatial distribution of heavy metals in soils and the potential risk associated with the ingestion of contaminated soil in the metropolis. Geostatistical tools can detect changes in correlation structure, and good knowledge of the study area helps to explain the different scales of variation detected. To this end, point-referenced data on heavy metals, measured from topsoil samples in a previous study, were collected at various locations. Linear models of regionalisation and coregionalisation were fitted to all experimental semivariograms to describe the spatial dependence between the topsoil heavy metals at different spatial scales, leading to ordinary kriging and cokriging at unsampled locations and the production of risk maps of soil contamination by these heavy metals. Results from both the univariate and multivariate semivariogram models showed strong spatial dependence, with autocorrelation ranges from 100 to 300 meters. The risk maps produced show strong spatial heterogeneity for almost all the soil heavy metals, with extreme risk of contamination found close to areas with commercial and industrial activities. Ongoing pollution interventions should therefore be geared towards these high-risk areas for efficient management of soil contamination and to avert further pollution in the metropolis.
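The experimental semivariograms that the regionalisation models are fitted to can be sketched with the classical estimator on an invented spatially correlated field (the sample locations and concentrations below are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical topsoil samples: 2D coordinates (m) and a heavy-metal concentration
coords = rng.uniform(0, 500, (150, 2))
# Spatially correlated toy field: smooth trend plus small-scale noise
z = np.sin(coords[:, 0] / 120) + np.cos(coords[:, 1] / 150) + rng.normal(0, 0.1, 150)

def empirical_semivariogram(coords, z, bins):
    """Classical estimator: gamma(h) = mean of (z_i - z_j)^2 / 2 over each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    iu = np.triu_indices(len(z), k=1)               # each pair counted once
    lags, halfsq = d[iu], 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
    gamma = np.array([halfsq[(lags >= lo) & (lags < hi)].mean()
                      for lo, hi in zip(bins[:-1], bins[1:])])
    centers = 0.5 * (np.asarray(bins[:-1]) + np.asarray(bins[1:]))
    return centers, gamma

h, gamma = empirical_semivariogram(coords, z, bins=np.arange(0, 350, 50))
for hc, g in zip(h, gamma):
    print(f"lag {hc:5.0f} m: gamma = {g:.3f}")
```

A regionalisation model (e.g. nugget plus spherical structures) would then be fitted to these (h, gamma) points before kriging.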

Keywords: coregionalization, heavy metals, multivariate geostatistical analysis, soil contamination, spatial distribution

Procedia PDF Downloads 285
5075 Modelling of Pervaporation Separation of Butanol from Aqueous Solutions Using Polydimethylsiloxane Mixed Matrix Membranes

Authors: Arian Ebneyamini, Hoda Azimi, Jules Thibaults, F. Handan Tezel

Abstract:

In this study, a modification of the Hennepe model for pervaporation separation of butanol from aqueous solutions using polydimethylsiloxane (PDMS) mixed matrix membranes is introduced and validated against experimental data. The model was compared to the original Hennepe model and to a few other models applicable to membrane gas separation processes, such as the Maxwell, Lewis-Nielsen and Pal models. Theoretical modifications for non-ideal interface morphology are offered to predict the permeability in the case of interface voids, interface rigidification and pore blockage. The model was in good agreement with the experimental data.
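Of the comparison models named above, the Maxwell model has a simple closed form for the effective permeability of a mixed matrix membrane. The sketch below uses illustrative permeability values, not the study's measured butanol data:

```python
def maxwell_permeability(p_c, p_d, phi):
    """Maxwell model: effective permeability of a mixed matrix membrane.

    p_c : permeability of the continuous (polymer) phase
    p_d : permeability of the dispersed (filler) phase
    phi : filler volume fraction (valid in the dilute limit)
    """
    num = p_d + 2 * p_c - 2 * phi * (p_c - p_d)
    den = p_d + 2 * p_c + phi * (p_c - p_d)
    return p_c * num / den

# hypothetical values in arbitrary units: filler 5x more permeable than PDMS
print(maxwell_permeability(p_c=1.0, p_d=5.0, phi=0.2))
```

At phi = 0 the expression reduces to the pure-polymer permeability, a quick sanity check; the interface-defect corrections the abstract mentions (voids, rigidification, pore blockage) modify the effective filler term.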

Keywords: butanol, PDMS, modeling, pervaporation, mixed matrix membranes

Procedia PDF Downloads 203
5074 Development of High Temperature Mo-Si-B Based In-situ Composites

Authors: Erhan Ayas, Buse Katipoğlu, Eda Metin, Rifat Yılmaz

Abstract:

The search has begun for new materials that can serve at temperatures even higher than the service temperature (~1150 °C) at which nickel-based superalloys are currently used. This search must also meet increasing demands for energy efficiency. Materials studied for aerospace applications are expected to have good oxidation resistance. Mo-Si-B alloys, which have higher operating temperatures than nickel-based superalloys, are candidates for the ultra-high-temperature materials used in gas turbines and jet engines, because the Moss and Mo₅SiB₂ (T2) phases exhibit a high melting temperature, excellent high-temperature creep strength and good oxidation resistance. However, their low fracture toughness at room temperature is a disadvantage, although this property can be improved through an optimum Moss phase fraction and microstructure control. High density is also a problem for structural parts: in turbine rotors, for example, the higher the weight, the higher the centrifugal force, which reduces the creep life of the material. The density of nickel-based superalloys, and of the T2 phase of Mo-Si-B alloys, is in the range of 8.6-9.2 g/cm³, but the Moss phase (density 10.2 g/cm³) lies above the density of nickel-based superalloys, so ceramic-based additions are used to bring the overall density to optimum values.
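The density argument can be illustrated with a simple rule-of-mixtures estimate. The phase fractions and the SiC addition below are hypothetical, chosen only to show how a light ceramic addition can pull the composite density back below the superalloy range:

```python
def composite_density(densities, vol_fractions):
    """Rule-of-mixtures estimate of composite density (ideal mixing assumed)."""
    assert abs(sum(vol_fractions) - 1.0) < 1e-9
    return sum(d * v for d, v in zip(densities, vol_fractions))

# values from the abstract: Moss ~10.2 g/cm3, T2 ~8.8 g/cm3 (mid-range);
# the 10 vol% SiC addition (~3.2 g/cm3) is a hypothetical illustration
rho = composite_density([10.2, 8.8, 3.2], [0.45, 0.45, 0.10])
print(round(rho, 2))  # below the 9.2 g/cm3 upper bound of the superalloys
```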

Keywords: molybdenum, composites, in-situ, MMC

Procedia PDF Downloads 50
5073 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failures or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as pieces of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements against a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, so that as they are impacted or degraded, the resulting functional availability of the system can be determined.
A reinforcement-learning-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, a structured method is created for evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
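The core degradation analysis the abstract outlines can be sketched as a toy graph simulation: fail some nodes, then measure the fraction of system functions still reachable from a gateway. The topology, gateway node, and function-to-node mapping below are invented for illustration, not the study's architectures:

```python
# adjacency list for a hypothetical distributed architecture; a function is
# available if at least one of its serving nodes is reachable from "gw"
EDGES = {
    "gw": ["a", "b"], "a": ["gw", "c"], "b": ["gw", "c", "d"],
    "c": ["a", "b"], "d": ["b"],
}
FUNCTIONS = {"telemetry": ["c"], "control": ["c", "d"], "storage": ["d"]}

def reachable(edges, start, failed):
    """Depth-first search that skips failed nodes."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in seen or n in failed:
            continue
        seen.add(n)
        stack.extend(edges.get(n, []))
    return seen

def functional_availability(failed):
    """Fraction of system functions still served after the given node failures."""
    up = reachable(EDGES, "gw", failed)
    ok = sum(any(n in up for n in nodes) for nodes in FUNCTIONS.values())
    return ok / len(FUNCTIONS)

print(functional_availability(set()))   # no degradation
print(functional_availability({"b"}))   # adversary removes node b
```

An attacking agent would then choose which nodes or links to degrade at each step, and the availability score across many such runs yields the resilience metric.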

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 87
5072 On Disaggregation and Consolidation of Imperfect Quality Shipments in an Extended EPQ Model

Authors: Hung-Chi Chang

Abstract:

For an extended EPQ model with random yield, an existing study revealed that both the disaggregating and consolidating shipment policies for imperfect quality items are independent of the holding cost, and recommended the model with the greatest economic benefit by comparing the least total cost of each of the three models investigated. To better capture real situations, we generalize that study to include different holding costs for perfect and imperfect quality items. Through analysis, we show that the above shipment policies do depend on the holding costs. Furthermore, we derive a simple decision rule, based solely on thresholds of the problem parameters, for selecting the superior model. The results are illustrated analytically and numerically.
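As a baseline for the extensions discussed here, the classical EPQ lot size (perfect quality, constant demand and production rates) can be computed directly. The parameter values below are hypothetical:

```python
from math import sqrt

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classical EPQ lot size; assumes perfect quality and constant rates,
    so it is only a baseline for the random-yield, imperfect-quality
    extensions considered in the abstract."""
    assert production_rate > demand  # otherwise backlog grows without bound
    return sqrt(2 * demand * setup_cost /
                (holding_cost * (1 - demand / production_rate)))

# hypothetical parameters: D = 4000 units/yr, K = $100/setup,
# h = $2/unit/yr, P = 10000 units/yr
q = epq(4000, 100, 2, 10000)
print(round(q, 1))
```

The extended models add yield randomness and separate holding costs for perfect and imperfect items, which is what makes the shipment-policy comparison nontrivial.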

Keywords: consolidating shipments, disaggregating shipments, EPQ, imperfect quality, inventory

Procedia PDF Downloads 361
5071 Enhance Customer Experience through Sustainable Development: The Case of a Natural Park

Authors: Lubica Hikkerova, Jean-Michel Sahut

Abstract:

This article aims to better understand how a natural park with a touristic vocation can benefit from its sustainable development approach to enhance the customer experience. To this end, we analyze, on the one hand, the interactions between the different stakeholders in this sustainable tourism offer and the ways they cooperate to build it and, on the other hand, the perceptions of customers. Two complementary qualitative methodologies were employed. As part of a systemic approach, a first study, through group discussions, was conducted with three categories of participants: (I) customers, (II) representatives of the park, communities, tourism offices and associations, and (III) service providers in the park. For the second study, semi-structured interviews were conducted with park managers and customers. The contributions are twofold. First, we demonstrate the value of a systemic approach to understanding sustainable tourism. Then, in the empirical part, we develop a model of causal loops that identifies the various factors of the offer that persuade potential tourists to visit the park, and their impact on the customer experience. The complementarity of this approach with the semi-structured interviews with all the stakeholders enabled us to issue recommendations to improve the customer experience.

Keywords: sustainable tourism, systemic approach, price, park

Procedia PDF Downloads 183
5070 Digital Storytelling for Community Culture

Authors: Sariyapa Kantawan, Muanfun Kongsomsawaeng

Abstract:

The Chanthaburi River community is an old mixed-culture village established in the 16th century. The town advanced more rapidly than others due to the ease of transportation at the time, when the river served as a road. The province's first road therefore begins here, propelling the town to become an important commercial and trading center for almost a century. As a result of this diverse culture, the architecture has been influenced by Western, Thai, Chinese, and Vietnamese styles, resulting in a new and distinctive style. To share this realm of memory, digital media enable the city to communicate its history and culture. This article describes a project that combines the concepts of digital storytelling and augmented reality and connects them to the Chanthaburi River community culture by using QR codes as markers to display 3D models on mobile screens.

Keywords: digital storytelling, community culture, river community, cultural heritage, augmented reality

Procedia PDF Downloads 40
5069 Functionalized Ultra-Soft Rubber for Soft Robotics Application

Authors: Shib Shankar Banerjee, Andreas Fery, Gert Heinrich, Amit Das

Abstract:

Recently, a growing need for the development of soft robots consisting of highly deformable and compliant materials has emerged from the serious limitations of conventional service robots. However, one of the main challenges of soft robotics is to develop such compliant materials, which facilitate the design of soft robotic structures and, simultaneously, the control of soft-body systems such as soft artificial muscles. Generally, silicone- or acrylic-based elastomer composites are used for soft robotics. However, the mechanical performance and long-term reliability of the functional parts (sensors, actuators, main body) of robots made from these composite materials are inferior. This work will present the development and characterization of robust, super-soft, programmable elastomeric materials from crosslinked natural rubber that can serve as touch and strain sensors for soft robotic arms, with very high elasticity and strain while the modulus is tuned in the kilopascal range. Our results suggest that such soft programmable natural elastomers are promising materials that can replace conventional silicone-based elastomers for soft robotics applications.

Keywords: elastomers, soft materials, natural rubber, sensors

Procedia PDF Downloads 143
5068 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy

Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro

Abstract:

Asphaltic bitumen has long been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is highly susceptible to temperature variations and dramatically changes its in-service behavior from a viscoelastic liquid at medium-high temperatures to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This results from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of the asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. To minimize these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical and barrier properties. To increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute for the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD) and atomic force microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when the bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes.
However, if polymeric MDI was involved, the sequence of addition exerted decisive control over the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion of the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination unless the entire fraction has effectively been fully delaminated (the situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modifications that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.

Keywords: atomic force microscopy, bitumen, composite, isocyanate, montmorillonite

Procedia PDF Downloads 247
5067 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, reducing the accuracy of the classification model. To filter out irrelevant elements, extract and utilize text content information more efficiently, and improve the accuracy of text classification models, the text in the corpus is first summarized using the TextRank extraction algorithm; the words in the summary are then used as nodes to construct a text graph, and a graph attention network (GAT) completes the text classification task. In tests on a Chinese dataset collected from the web, the classification accuracy improved over the direct method of generating graph structures from the full text.
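The graph-construction step between summarization and the GAT can be sketched as a sliding-window co-occurrence graph over the extracted summary tokens. The tokens and window size below are illustrative, and the paper's actual construction may differ (the TextRank extraction itself is assumed done upstream):

```python
def summary_to_graph(summary_tokens, window=3):
    """Build an undirected word graph from an extracted summary:
    tokens become nodes, and two words co-occurring within a sliding
    window of the given size share an edge."""
    nodes = sorted(set(summary_tokens))
    edges = set()
    for i in range(len(summary_tokens)):
        for j in range(i + 1, min(i + window, len(summary_tokens))):
            a, b = summary_tokens[i], summary_tokens[j]
            if a != b:
                edges.add(tuple(sorted((a, b))))
    return nodes, sorted(edges)

# illustrative (already-segmented) summary tokens
tokens = ["graph", "attention", "network", "improves", "text",
          "classification", "accuracy"]
nodes, edges = summary_to_graph(tokens, window=3)
print(len(nodes), len(edges))
```

The resulting node and edge lists would then be converted to the adjacency and feature matrices the GAT consumes.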

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 80
5066 Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization

Authors: Marcell Serra de Almeida Martins, Benedito de Souza Ribeiro Neto, Gerson Lima Serejo, Carlos Gustavo Resque Dos Santos

Abstract:

Indoor positioning systems use sensors such as Bluetooth, ZigBee, and Wi-Fi, as well as cameras for image capture, which can be fixed or mobile. These computer-vision-based positioning approaches are low-cost to implement, especially when a mobile camera is used. The present study aims to design a fiducial marker for a low-cost indoor localization system. The marker is tape-shaped to allow continuous reading by two detection algorithms, one for greater distances and another for smaller distances, so that the localization service remains operational even as the capture distance varies. A minimal localization and reading algorithm was implemented to validate the proposed marker design. The accuracy tests consider readings at capture distances between 0.5 and 10 meters, comparing the proposed marker with others. The tests showed that the proposed marker has a broader capture range than ArUco and QR Code markers of the same size, thereby reducing visual pollution and maximizing tracking, since the environment can be covered entirely.
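The two-regime reading strategy can be sketched as a simple switch on the marker's apparent size in the image. The pinhole-camera numbers and the threshold below are assumptions for illustration, not values from the study:

```python
def select_detector(apparent_size_px, threshold_px=40):
    """Pick which of the two decoding routines to run, based on the marker's
    apparent size in pixels (hypothetical threshold). A large apparent size
    selects the close-range, fine-detail decoder; a small one selects the
    long-range, coarse-pattern decoder."""
    return "fine" if apparent_size_px >= threshold_px else "coarse"

def apparent_size(marker_height_m=0.1, focal_px=800, distance_m=2.0):
    """Rough apparent marker height under a pinhole camera model
    (illustrative numbers only)."""
    return focal_px * marker_height_m / distance_m

print(select_detector(apparent_size(distance_m=0.5)))   # close range
print(select_detector(apparent_size(distance_m=10.0)))  # far range
```

Running both decoders continuously and arbitrating by scale is what keeps the service operational across the 0.5-10 m test range.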

Keywords: multiscale recognition, indoor localization, tape-shaped marker, fiducial marker

Procedia PDF Downloads 116
5065 Application of Neuro-Fuzzy Technique for Optimizing the PVC Membrane Sensor

Authors: Majid Rezayi, Sh. Shahaboddin, HNM E. Mahmud, A. Yadollah, A. Saeid, A. Yatimah

Abstract:

In this study, an adaptive neuro-fuzzy inference system (ANFIS) was applied to model how membrane composition affects the potential response of our previously reported polymeric PVC sensor for determining titanium (III) ions. The performance statistics of the artificial neural network (ANN) and linear regression models for predicting the potential slope from the membrane composition of the titanium (III) ion-selective electrode were compared with the ANFIS technique. The results show that the ANFIS model can be used as a practical tool for obtaining the Nernstian slope of the proposed sensor.
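For reference, the Nernstian slope the models aim to predict can be checked against a calibration fit: for a trivalent ion such as Ti(III), the theoretical slope 2.303RT/zF is about 19.7 mV per decade at 25 °C. The calibration data below are synthetic, constructed to be near-Nernstian, not the sensor's measurements:

```python
def fitted_slope(log_activities, potentials_mV):
    """Least-squares slope of cell potential vs. log10 ion activity."""
    n = len(log_activities)
    mx = sum(log_activities) / n
    my = sum(potentials_mV) / n
    num = sum((x - mx) * (y - my)
              for x, y in zip(log_activities, potentials_mV))
    den = sum((x - mx) ** 2 for x in log_activities)
    return num / den

def nernstian_slope(z, temp_K=298.15):
    """Theoretical Nernstian slope 2.303RT/zF in mV per decade."""
    R, F = 8.314462618, 96485.33212
    return 2.303 * R * temp_K / (z * F) * 1000

# synthetic calibration points for a Ti(III) electrode (z = 3)
logs = [-5, -4, -3, -2]
emf = [110.0, 129.8, 149.5, 169.3]   # mV
print(round(fitted_slope(logs, emf), 1), round(nernstian_slope(3), 1))
```

Comparing the fitted slope to the theoretical value is the usual check that an electrode composition behaves Nernstianly.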

Keywords: adaptive neuro-fuzzy inference, PVC sensor, titanium (III) ions, Nernstian slope

Procedia PDF Downloads 265
5064 Disequilibrium between the Demand and Supply of Teachers of English at the Junior Secondary Schools in Gashua, Yobe State: Options for 2015 and Beyond

Authors: Clifford Irikefe Gbeyonron

Abstract:

The Nigerian educational system, which uses English as a major medium of instruction, has been designed so that the cognitive, psychomotor and affective endowments of the Nigerian learner can be explored. However, the human resources that would impart the desired knowledge, skills and values to learners appear to be in short supply. This paucity is most manifest among teachers of English. This research therefore examined the demand and supply of teachers of English at the junior secondary schools in Gashua, Yobe State. The results indicate a dearth of teachers of English in the domain under review. This presents a challenge that should propel English language teacher education institutions to produce more teachers of English. Accordingly, this paper recommends that the teacher production process make use of qualified and enthusiastic teacher trainers able to inculcate in-depth linguistic and communicative competence in English, and English language teaching skills, in prospective teachers of English. In addition, English language education service providers should attract and retain trained teachers of English in English language teaching so that all the states of Nigeria can experience educational development.

Keywords: demand, supply, teachers of English, Yobe State

Procedia PDF Downloads 359