Search results for: supply demand model

15718 Modeling Child Development Factors for the Early Introduction of ICTs in Schools

Authors: K. E. Oyetade, S. D. Eyono Obono

Abstract:

One of the fundamental characteristics of Information and Communication Technology (ICT) is the continuous release of new tools and models, which affects the academic, social, and psychological benefits of introducing ICTs in schools. However, there is growing concern about the negative impact on students when ICTs are introduced early in schools for teaching and learning. This study aims to design a model of the child development factors affecting the early introduction of ICTs in schools, in order to improve the understanding of child development and of ICT introduction in schools. The proposed model is based on a sound theoretical framework. It was designed following a literature review of child development theories and child development factors. The child development theory that best fitted all of the identified factors was then chosen as the basis for the proposed model. The study found that Jean Piaget's cognitive development theory is the most adequate theoretical framework for modeling child development factors for ICT introduction in schools.

Keywords: child development factors, child development theories, ICTs, theory

Procedia PDF Downloads 393
15717 Transformation of the Business Model in an Occupational Health Care Company Embedded in an Emerging Personal Data Ecosystem: A Case Study in Finland

Authors: Tero Huhtala, Minna Pikkarainen, Saila Saraniemi

Abstract:

Information technology has long been used as an enabler of exchange for goods and services. Services are evolving from generic to personalized, and the reverse use of customer data has been discussed in both academia and industry for the past few years. This article presents the results of an empirical case study in the area of preventive health care services. The primary data were gathered in workshops, in which future personal data-based services were conceptualized by analyzing future scenarios from a business perspective. The aim of this study is to understand business model transformation in emerging personal data ecosystems. The work was done as a case study in the context of occupational healthcare. The results have implications for theory and practice, indicating that adopting personal data management principles requires transformation of the business model, which, if successfully managed, may provide access to more resources, the potential to offer better value, and additional customer channels. These advantages correlate with the broadening of the business ecosystem. Expanding the scope of this study to include more actors would improve the validity of the research. The results draw from existing literature and are based on findings from a case study and the economic properties of the healthcare industry in Finland.

Keywords: ecosystem, business model, personal data, preventive healthcare

Procedia PDF Downloads 232
15716 Dynamic Process Monitoring of an Ammonia Synthesis Fixed-Bed Reactor

Authors: Bothinah Altaf, Gary Montague, Elaine B. Martin

Abstract:

This study involves the modeling and monitoring of an ammonia synthesis fixed-bed reactor using partial least squares (PLS) and its variants. The process exhibits complex dynamic behavior due to the presence of heat recycling and feed quench. One limitation of a static PLS model in this situation is that it does not take account of the process dynamics, hence dynamic PLS was used. Although it showed superior performance to static PLS in terms of prediction, the monitoring scheme was inappropriate, hence adaptive PLS was considered. A limitation of adaptive PLS is that non-conforming observations also contribute to the model; therefore, a new adaptive approach was developed, robust adaptive dynamic PLS. This approach updates a dynamic PLS model and is robust to non-representative data. The developed methodology showed a clear improvement over existing approaches in terms of the modeling of the reactor and the detection of faults.
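
To illustrate the adaptive idea described above, the following is a minimal Python sketch of moving-window PLS monitoring with a crude outlier-rejection rule; it is not the authors' robust adaptive dynamic PLS, and the simulated data, window length and 3-sigma threshold are illustrative assumptions.

```python
# Moving-window adaptive PLS sketch (illustrative only, not the paper's method).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                     # simulated process measurements
y = X @ rng.normal(size=(8, 1)) + 0.1 * rng.normal(size=(500, 1))

window = 100                                      # moving-window length (assumed)
model = PLSRegression(n_components=3)

for t in range(window, len(X)):
    Xw, yw = X[t - window:t], y[t - window:t]     # refit on the most recent window
    model.fit(Xw, yw)
    residual = (y[t] - model.predict(X[t:t + 1])).item()
    # crude robustness rule: a grossly non-representative sample is replaced by
    # the last conforming observation so it does not pollute future windows
    if abs(residual) > 3 * np.std(yw - model.predict(Xw)):
        X[t], y[t] = Xw[-1], yw[-1]
```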

Keywords: ammonia synthesis fixed-bed reactor, dynamic partial least squares modeling, recursive partial least squares, robust modeling

Procedia PDF Downloads 375
15715 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

Data grid is a technology that has brought a full emergence of new challenges, such as the heterogeneity and availability of geographically distributed resources, fast data access, minimized latency and fault tolerance. Researchers interested in this technology address problems shared with industry, such as task scheduling, load balancing and replication. The latter is an effective solution for achieving good performance in terms of data access and use of grid resources, and better data availability at reasonable cost. In a system with duplication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model which depends on several factors, such as bandwidth, data size and storage nodes.
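
The abstract does not give the cost model itself, so the following Python sketch only illustrates the general shape such a model can take: a read/write response cost built from bandwidth, data size and node terms, with all coefficients and inputs assumed for illustration.

```python
# Illustrative response-cost sketch for replica placement in a data grid;
# the transfer, queueing and write-propagation terms below are assumptions.
def read_cost(data_size_mb, bandwidth_mbps, node_load):
    transfer = data_size_mb * 8 / bandwidth_mbps      # seconds to move the data
    queueing = 0.05 * node_load                       # assumed per-request delay
    return transfer + queueing

def write_cost(data_size_mb, replica_links_mbps):
    # a write must be propagated to every replica to keep the copies coherent
    return sum(data_size_mb * 8 / bw for bw in replica_links_mbps)

def placement_cost(read_ratio, data_size_mb, bandwidth_mbps, node_load, replica_links_mbps):
    return (read_ratio * read_cost(data_size_mb, bandwidth_mbps, node_load)
            + (1 - read_ratio) * write_cost(data_size_mb, replica_links_mbps))

print(placement_cost(0.8, 512, 100, 3, [100, 50]))    # mostly-read workload example
```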

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 256
15714 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence

Authors: Yves Alain Ach, Sandra Rmadi Said

Abstract:

An extensive review of the literature in relation to brands has shown that little research has focused on the nature and determinants of the information disclosed by companies with respect to the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information disclosed by companies in terms of estimating the value of brands and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of information on intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic and strategic information in relation to their brands instead of providing financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm tend to communicate more information to investors in their annual report. Our study points out that the sector is not an explanatory variable for voluntary brand disclosure, unlike in previous studies on intangible capital. Our study is distinguished by its focus on an element that has been little studied in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link. Many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between the supply of brand information and performance is insignificant. This relationship has already been contested by previous research, which shows that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution focuses on the nature of the inherent characteristics of the companies that disclose the most information about brands. Our results show the absence of a link between size and industry on the one hand and the supply of brand information on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic and strategic. We therefore question the reasons that may lead companies to voluntarily communicate mainly accounting, economic and strategic information in relation to their brands from one year to the next and not to communicate detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. They highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a "report on intangible assets".

Keywords: brand related information, brand value, information disclosure, determinants

Procedia PDF Downloads 67
15713 Intellectual Property and SMEs in the Baltic Sea Region: A Comparative Study on the Use of the Utility Model Protection

Authors: Christina Wainikka, Besrat Tesfaye

Abstract:

Several of the countries in the Baltic Sea region are ranked high in international innovation rankings, such as the Global Innovation Index and the European Innovation Scoreboard. There are, however, some concerns about the performance of individual countries. For example, there is a widely spread notion of a "Swedish paradox": Sweden is ranked high due to its investments in R&D and its patent activity, but the outcome is not as high as could be expected. SMEs in Sweden are also below the EU average when it comes to registering intellectual property rights such as patents and trademarks. This study concentrates on utility model protection. This intellectual property right does not exist in Sweden, but it does in, for example, Finland and Germany. Utility model protection is sometimes referred to as a "patent light", since it is easier to obtain than patent protection but still covers technical solutions. Examining statistics on patenting and on the registration of utility models shows that utility model protection is scarcely used in the countries that have it. In Germany, 10,577 utility model applications were made in 2021, and in Finland 259; this can be compared with 58,568 patent applications in Germany and 1,662 in Finland in the same year. In Sweden there has never been protection for utility models; the only protection for technical solutions is patents and business secrets. The threshold for obtaining a patent is high, due to the legal requirements and the costs, and patent protection is therefore often not chosen by SMEs in Sweden. This study examines whether the protection of utility models in other countries in the Baltic region provides SMEs in those countries with better options to protect their innovations. The legal methodology is comparative law. In order to study the effects of the legal differences, statistics are examined and interviews conducted with SMEs from different industries.

Keywords: Baltic Sea region, comparative law, SME, utility model

Procedia PDF Downloads 97
15712 The Development of Private Housing Schemes to Address the Housing Problem: A Case Study of Islamabad

Authors: Zafar Iqbal Zafar, Abdul Waheed

Abstract:

The Capital Development Authority (CDA) Ordinance 1960 requires the CDA to acquire land for the provision of housing in Islamabad. However, the pace of residential development was slow while the demand for housing was increasing rapidly. To resolve the growing housing problem, the CDA involved the private sector in the development of housing schemes. Detailed bylaws for the regulation of private housing schemes were prepared, called the "Modalities and Procedures". This paper explains how the Modalities and Procedures of the CDA have been successful in regulating the development of private housing schemes in Islamabad.

Keywords: housing schemes, master plan, development works, zoning regulations

Procedia PDF Downloads 182
15711 Grading Fourteen Zones of Isfahan in Terms of the Impact of Globalization on the Urban Fabric of the City, Using the TOPSIS Model

Authors: A. Zahedi Yeganeh, A. Khademolhosseini, R. Mokhtari Malekabadi

Abstract:

Undoubtedly, one of the most far-reaching and controversial topics considered in the past few decades has been globalization. Globalization lies at the essence of modern culture. It is a complex and rapidly expanding network of links and mutual interdependence that is an aspect of modern life, though some argue that this link has existed since the beginning of human history. If we consider globalization as a dynamic social process in which the geographical constraints governing political, economic, social and cultural relationships have been undermined, it might not be possible to simply describe its impact on the urban fabric. But since this phenomenon involves increased communication between societies (while preserving their main cultural-regional characteristics) and an increased possibility of influencing other societies, the need for more studies is felt. The main objective of this study is to grade the zones based on selected globalization factors affecting the urban fabric, applying the TOPSIS model. The research method is descriptive-analytical and survey-based. For data analysis, the TOPSIS model and SPSS software were used, and the results were mapped for the fourteen zones using GIS software. The results show that the fourteen zones of Isfahan were not influenced by globalization to a similar degree, and there are large differences between city zones in this respect; the most affected areas are municipal zones 5, 6 and 9, and the least affected are zones 2, 3 and 4.
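
For readers unfamiliar with the method, the following Python sketch shows the standard TOPSIS steps (vector normalisation, weighting, ideal and anti-ideal distances, closeness coefficient); the decision matrix, weights and criterion directions are random placeholders, not the indicators used for the fourteen zones.

```python
# Minimal TOPSIS sketch; data, weights and criterion directions are illustrative.
import numpy as np

X = np.random.default_rng(1).uniform(1, 10, size=(14, 4))  # 14 zones x 4 indicators
w = np.array([0.4, 0.3, 0.2, 0.1])                         # assumed weights
benefit = np.array([True, True, False, True])              # criterion direction

R = X / np.linalg.norm(X, axis=0)           # vector-normalised matrix
V = R * w                                   # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # higher = closer to the ideal profile

ranking = np.argsort(-closeness) + 1        # zone indices ranked by closeness
print(ranking)
```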

Keywords: grading, globalization, urban fabric, 14 zones of Isfahan, TOPSIS model

Procedia PDF Downloads 298
15710 An Improved Single Point Closure Model Based on Dissipation Anisotropy for Geophysical Turbulent Flows

Authors: A. P. Joshi, H. V. Warrior, J. P. Panda

Abstract:

This paper is a continuation of the work carried out by various turbulence modelers in oceanography on the topic of oceanic turbulent mixing. It evaluates the evolution of ocean water temperature and salinity through appropriate modeling of turbulent mixing, using a proper prescription of the eddy viscosity. Many modelers in the past have suggested that terms like shear, buoyancy and vorticity are the parameters that decide the slow pressure-strain correlation. We add to this the fact that dissipation anisotropy also modifies the correlation through the eddy viscosity parameterization. This recalibrates the established correlation constants slightly and gives improved results. This anisotropization of dissipation implies that the critical Richardson number increases well beyond unity (to 1.66) to accommodate enhanced mixing, as is seen in reality. The model is run for a couple of test cases in the General Ocean Turbulence Model (GOTM) and the results are presented here.

Keywords: anisotropy, GOTM, pressure-strain correlation, Richardson critical number

Procedia PDF Downloads 158
15709 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty

Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)

Abstract:

In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations for the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as facades, windows, and doors involves the life cycle stages applied to product design as well as many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing data from LCA and EPDs to rate circularity, with a value between 0 and 1, where higher values indicate higher circularity. Expanding on the MCI with additional indicators such as a Water Circularity Index (WCI), an Energy Circularity Index (ECI), a Social Circularity Index (SCI), Life Cycle Economic Value (EV), and calculations of biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically to product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage in the disassembly process. The low MCI can be counteracted by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for disassembly and urban mining have so far been integrated within the construction field only at small scales, as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, the sustainability of building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
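
As a rough orientation only, the sketch below shows the commonly cited core of the MCI calculation (a linear flow index combined with a utility factor); it simplifies the published methodology (for instance, recycling-efficiency terms are omitted from the linear flow index), so the foundation's current specification should be consulted before use.

```python
# Simplified Material Circularity Indicator sketch; verify against the current
# Ellen MacArthur Foundation methodology before relying on it.
def mci(mass, virgin_feedstock, unrecoverable_waste, lifetime_ratio, intensity_ratio):
    """All masses in the same unit; the ratios are relative to the industry average."""
    lfi = (virgin_feedstock + unrecoverable_waste) / (2 * mass)   # linear flow index (simplified)
    utility = lifetime_ratio * intensity_ratio                    # X = (L/Lav) * (U/Uav)
    return max(0.0, 1.0 - lfi * (0.9 / utility))

# Example: a 100 kg window unit with 70 kg virgin input and 70 kg landfilled,
# average lifetime and use intensity -> low circularity, consistent with the
# glass-heavy products discussed above.
print(round(mci(100, 70, 70, 1.0, 1.0), 2))
```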

Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator

Procedia PDF Downloads 70
15708 Assessment of Urban Infrastructure and Health Using Principal Component Analysis and Geographic Information System: A Case of Ahmedabad, India

Authors: Anusha Vaddiraj Pallapu

Abstract:

Across the globe, there is a steady increase in the number of people residing in urban areas. Due to this increase in urban population, urban health is being affected. Major issues have been identified, such as overcrowding, air pollution, unhealthy diets, inadequate infrastructure, poor solid waste management systems and insufficient access to health facilities, and these issues are increasingly visible in health statistics, with diseases and deaths rising rapidly in urban areas. Therefore, the present study aims to assess health statistics and infrastructure services in urban areas to understand the cause-and-effect relationship between infrastructure, its management and (water-borne) diseases. Most Indian cities have municipal boundaries authorized by their respective municipal corporations and development authorities. Generally, cities have various zones under which municipal wards exist. The paper focuses on the city of Ahmedabad, in Gujarat state. The Ahmedabad Municipal Corporation (AMC) is divided into six zones, namely the Central, West, New-West, East, North and South zones. Each zone includes various wards. The incidence of diseases in Ahmedabad linked to infrastructure, such as water-borne diseases, was identified, and the occurrence of water-borne diseases in the urban area was then examined at the zone level. The study methodology follows four steps: 1) pre-field literature study: a study of sewerage systems in urban areas and their best practices, and of public health status globally and in the Indian scenario; 2) field study: data collection and interviews with stakeholders regarding health status and issues at each zone and ward level; 3) post-field: data analysis with a qualitative description of each ward of each zone, followed by correlation coefficient analysis between sewerage coverage, diseases and density of each ward using geographic information system (GIS) mapping; 4) identification of reasons: the health effects in each zone and ward, followed by correlation analysis of each reason. The results reveal that health conditions in the Ahmedabad municipal zones are affected by the slums created by people who have migrated from various rural and urban areas. It is also observed that, due to the increase in population, water supply and sewerage management are affected. The overall strain on infrastructure is creating the health problems detailed in the paper, which are mapped using geographic information systems for an Indian city.

Keywords: infrastructure, municipal wards, GIS, water supply, sewerage, medical facilities, water borne diseases

Procedia PDF Downloads 193
15707 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbor, artificial neural network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
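
The data-driven baseline described above can be reproduced in outline with a few lines of scikit-learn; the sketch below runs a 5-fold cross-validation of a Random Forest and reports RMSEP and MAEP, with random data standing in for the USDA yield and climate records.

```python
# Random Forest baseline with 5-fold CV; synthetic data replaces the real records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(720, 10))                     # 720 county records of climate features
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=720)

rmse_folds, mae_folds = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    rmse_folds.append(mean_squared_error(y[test], pred) ** 0.5)
    mae_folds.append(mean_absolute_error(y[test], pred))

print(f"RMSEP = {np.mean(rmse_folds):.3f}, MAEP = {np.mean(mae_folds):.3f}")
```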

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 217
15706 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging

Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig

Abstract:

A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. Key steps of clogging are considered: transport of particles by turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of particles on the wall; and the growth of the clog layer and its interaction with the flow. The current paper investigates the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including flow becomes sensitive to mesh quality. The use of 2D meshes leads to an overestimation of the clog growth, because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to an erroneous estimation of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (inflation layers) in near-wall regions.

Keywords: clogging, continuous casting, inclusion, simulation, submerged entry nozzle

Procedia PDF Downloads 269
15705 Establishment and Application of Numerical Simulation Model for Shot Peen Forming Stress Field Method

Authors: Shuo Tian, Xuepiao Bai, Jianqin Shang, Pengtao Gai, Yuansong Zeng

Abstract:

Shot peen forming is an essential forming process for aircraft metal wing panels. With the development of computer simulation technology, scholars have proposed a numerical simulation method for shot peen forming based on the stress field. Three shot peen forming indexes, crater diameter, shot speed and surface coverage, are required as simulation parameters in the stress field method. It is necessary to establish the relationship between simulation and experimental process parameters in order to simulate the deformation under different shot peen forming parameters. Shot peen forming tests on 2024-T351 aluminum alloy workpieces were carried out using a uniform test design, with air pressure, feed rate and shot flow selected as the three factors. Based on the results, a second-order response surface model between the simulation parameters and the uniform test factors was established by stepwise regression in MATLAB. The response surface model was combined with the stress field method to simulate the shot peen forming deformation of the workpiece. Compared with the experimental results, the simulated values were smaller than the corresponding test values, with maximum and average errors of 14.8% and 9%, respectively.
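
A second-order response surface of the kind described above can be fitted as follows; the sketch uses a full quadratic fit rather than stepwise term selection, and the factor ranges and synthetic response are assumptions, not the 2024-T351 test data.

```python
# Quadratic response surface linking process factors (air pressure, feed rate,
# shot flow) to one simulation parameter such as crater diameter; synthetic data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X = rng.uniform([0.2, 0.5, 2.0], [0.5, 2.0, 8.0], size=(30, 3))  # pressure, feed, flow
y = 0.1 + 0.4 * X[:, 0] + 0.05 * X[:, 2] + 0.2 * X[:, 0] * X[:, 2] \
    + rng.normal(scale=0.01, size=30)                            # assumed response

quad = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression().fit(quad.fit_transform(X), y)

x_new = np.array([[0.35, 1.2, 5.0]])
print("predicted crater diameter:", model.predict(quad.transform(x_new))[0])
```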

Keywords: shot peen forming, process parameter, response surface model, numerical simulation

Procedia PDF Downloads 63
15704 Evaluating the Use of Digital Art Tools for Drawing to Enhance Artistic Ability and Improve Digital Skill among Junior School Students

Authors: Aber Salem Aboalgasm, Rupert Ward

Abstract:

This study investigated some results of the use of digital art tools by junior school children in order to discover if these tools could promote artistic ability and creativity. The study considers the ease of use and usefulness of the tools as well as how to assess artwork produced by digital means. As the use of these tools is a relatively new development in Art education, this study may help educators in their choice of which tools to use and when to use them. The study also aims to present a model for the assessment of students’ artistic development and creativity by studying their artistic activity. This model can help in determining differences in students’ creative ability and could be useful both for teachers, as a means of assessing digital artwork, and for students, by providing the motivation to use the tools to their fullest extent. Sixteen students aged nine to ten years old were observed and recorded while they used the digital drawing tools. The study found that, according to the students’ own statements, it was not the ease of use but the successful effects the tools provided which motivated the children to use them.

Keywords: artistic ability, creativity, drawing digital tool, TAM model, psychomotor domain

Procedia PDF Downloads 316
15703 Modelling Phytoremediation Rates of Aquatic Macrophytes in Aquaculture Effluent

Authors: E. A. Kiridi, A. O. Ogunlela

Abstract:

Pollutants from aquacultural practices constitute environmental problems, and phytoremediation could offer a cheaper, environmentally sustainable alternative, since equipment for advanced treatment of fish tank effluent is expensive to import, install, operate and maintain, especially in developing countries. The main objective of this research was, therefore, to develop a mathematical model for phytoremediation by aquatic plants in aquaculture wastewater. Other objectives were to evaluate the effect of retention time on phytoremediation rates using the model, and to measure the nutrient level of the aquaculture effluent and the phytoremediation rates of three aquatic macrophytes, namely water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes) and morning glory (Ipomoea asarifolia). A completely randomized experimental design was used in the study. Approximately 100 g of each macrophyte were introduced into the hydroponic units and phytoremediation indices were monitored at 8 intervals from the first to the 28th day. The water quality parameters measured were pH and electrical conductivity (EC), along with the concentrations of ammonium-nitrogen (NH₄⁺-N), nitrite-nitrogen (NO₂⁻-N), nitrate-nitrogen (NO₃⁻-N) and phosphate-phosphorus (PO₄³⁻-P), and the biomass value. The biomass produced by water hyacinth was 438.2 g, 600.7 g, 688.2 g and 725.7 g at four 7-day intervals. The corresponding values for water lettuce were 361.2 g, 498.7 g, 561.2 g and 623.7 g, and for morning glory 417.0 g, 567.0 g, 642.0 g and 679.5 g. The coefficient of determination was greater than 80% for EC, TDS, NO₂⁻-N and NO₃⁻-N, and 70% for NH₄⁺-N, for any of the macrophytes, and the predicted values were within the 95% confidence interval of the measured values. Therefore, the model is valuable in the design and operation of phytoremediation systems for aquaculture effluent.

Keywords: aquaculture effluent, macrophytes, mathematical model, phytoremediation

Procedia PDF Downloads 207
15702 Assessing the Cumulative Impact of PM₂.₅ Emissions from Power Plants by Using the Hybrid Air Quality Model and Evaluating the Contributing Salient Factor in South Taiwan

Authors: Jackson Simon Lusagalika, Lai Hsin-Chih, Dai Yu-Tung

Abstract:

Particles with an aerodynamic diameter of 2.5 micrometers or less, referred to as fine particulate matter (PM₂.₅), are easily inhaled and can travel deeper into the lungs than other particles in the atmosphere, where they may have detrimental health consequences. In this study, we use a hybrid model that combines CMAQ and AERMOD, as well as initial meteorological fields from the Weather Research and Forecasting (WRF) model, to study the impact of power plant PM₂.₅ emissions in South Taiwan, since the region frequently experiences higher PM₂.₅ levels. The specific date of March 3, 2022 was chosen because a power outage prompted the bulk of power plants to shut down. Ordinarily, it is not conceivable anywhere in the world to turn off the power for the sole purpose of doing research; this power outage and the associated shutdown of power plants therefore offered a rare opportunity to evaluate the impact of air pollution driven by the power sector. Accordingly, four numerical experiments were conducted in the study using the Continuous Emission Data System (CEMS), assuming that the power plants had continued to function normally after the power outage. The hybrid model results revealed that the power plants have a minor impact in the study region. However, we examined the accumulation of PM₂.₅ and discovered that once the vortex at 925 hPa was established and moved to the north of Taiwan's coast, the study region experienced higher observed PM₂.₅ concentrations driven by meteorological factors. This study recommends that decision-makers take into account not only control techniques, specifically emission reductions, but also the atmospheric and meteorological implications in future investigations.

Keywords: PM₂.₅ concentration, power plants, hybrid air quality model, CEMS, vorticity

Procedia PDF Downloads 62
15701 Production of Alcohol from Sweet Potato

Authors: Abhishek S. Shete

Abstract:

There is nothing new in the use of alcohol made from root crops as a motor fuel. Alcohol is an excellent alternative motor fuel for petrol engines. The reason alcohol fuel has not been fully exploited is that, up until now, gasoline has been cheap, available, and easy to produce. However, nowadays crude oil is getting scarce, and the historic price difference between alcohol and gasoline is getting narrower. Alcohol fuel can be an important part of the solution for Rwanda because there is tremendous scope to convert the bulk production of sweet potato into alcohol. The total sweet potato production across both seasons is 1,607,296 tonnes/year. The average productivity of sweet potato in the country, irrespective of season, is 8.9 tonnes/ha. If all of the available agricultural surplus were converted to ethanol, alcohol would supply less than 5% of motor fuel needs.

Keywords: root crops, sweet potato, surplus, alcohol

Procedia PDF Downloads 411
15700 Dynamic Thermal Modelling of a PEMFC-Type Fuel Cell

Authors: Marco Avila Lopez, Hasnae Ait-Douchi, Silvia De Los Santos, Badr Eddine Lebrouhi, Pamela Ramírez Vidal

Abstract:

In the context of the energy transition, fuel cell technology has emerged as a solution for harnessing hydrogen energy and mitigating greenhouse gas emissions. An in-depth study was conducted on a PEMFC-type fuel cell, beginning with an analysis of its operating principles and constituent components. The fuel cell was then modelled using the Python programming language, covering both steady-state and transient regimes. For the steady-state regime, the physical and electrochemical phenomena occurring within the fuel cell were modelled under the assumption of a uniform temperature throughout all cell compartments. Parametric identification was carried out, resulting in a mean error of only 1.62% when the model results were compared to experimental data documented in the literature. The dynamic model that was developed enabled scrutiny of the fuel cell's response in terms of temperature and voltage under varying current conditions.
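
For orientation, a steady-state polarization sketch of the kind used in such models is shown below: cell voltage as the open-circuit value minus activation, ohmic and concentration losses; the parameter values are illustrative placeholders, not the identified ones.

```python
# Steady-state PEM fuel cell polarization sketch; parameters are illustrative.
import numpy as np

def cell_voltage(i, E0=1.1, i0=1e-3, i_L=1.4, R_ohm=0.15, alpha=0.5, T=343.15):
    F, R = 96485.0, 8.314
    eta_act = (R * T) / (alpha * F) * np.log(i / i0)        # Tafel activation loss
    eta_ohm = R_ohm * i                                     # ohmic loss
    eta_con = -(R * T) / (2 * F) * np.log(1 - i / i_L)      # concentration loss
    return E0 - eta_act - eta_ohm - eta_con

currents = np.linspace(0.01, 1.3, 10)                       # current density, A/cm^2
for i, v in zip(currents, cell_voltage(currents)):
    print(f"i = {i:.2f} A/cm2 -> V = {v:.3f} V")
```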

Keywords: fuel cell, modelling, dynamic, thermal model, PEMFC

Procedia PDF Downloads 69
15699 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with clear clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about such models' accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. This deep learning model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member's encounter (date of service) sequence. The model attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years' medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3 and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Two examples illustrate this. Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts member A has a high risk of CKD3 on the basis of well-contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low kidney-function 'Glomerular filtration rate' results. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings on medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model did not identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance of error and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claim data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
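
The following PyTorch sketch shows, in much reduced form, the attention-pooling idea behind one level of the three-level structure (weights over a member's encounter sequence that double as an evidence trace); the dimensions, encoder and data are illustrative, not the production model.

```python
# Single-level attention pooling over an encounter sequence (illustrative sketch).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)          # learns how much each encounter matters

    def forward(self, x):                       # x: (batch, encounters, dim)
        weights = torch.softmax(self.score(x), dim=1)
        pooled = (weights * x).sum(dim=1)       # weighted member representation
        return pooled, weights                  # weights double as the "evidence" trace

encoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
pool = AttentionPool(64)
head = nn.Linear(64, 1)                          # CKD3 risk logit

encounters = torch.randn(2, 36, 64)             # 2 members, 36 encounter embeddings each
pooled, attn = pool(encoder(encounters))
risk = torch.sigmoid(head(pooled))
print(risk.shape, attn.shape)                   # torch.Size([2, 1]) torch.Size([2, 36, 1])
```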

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 82
15698 Closed-Form Solutions for Nanobeams Based on the Nonlocal Euler-Bernoulli Theory

Authors: Francesco Marotti de Sciarra, Raffaele Barretta

Abstract:

Starting from nonlocal continuum mechanics, a new thermodynamically consistent nonlocal model of Euler-Bernoulli nanobeams is provided. The nonlocal variational formulation is consistently derived and the governing differential equation for the transverse displacement is presented. Higher-order boundary conditions are then consistently derived. An example is contributed in order to show the effectiveness of the proposed model.
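
For orientation, the widely used Eringen-type nonlocal Euler-Bernoulli relations take the following standard textbook form (not necessarily the exact variational model derived in the paper), where e₀a is the nonlocal length-scale parameter, EI the bending stiffness, w the transverse displacement and q the transverse load:

```latex
% Eringen-type nonlocal Euler-Bernoulli beam (standard form, for orientation only):
% nonlocal moment-curvature law and the resulting governing equation in w(x).
\[
\begin{aligned}
M(x) - (e_0 a)^2\, M''(x) &= -\,E I\, w''(x),\\
E I\, \frac{d^4 w}{d x^4} &= q(x) - (e_0 a)^2\, \frac{d^2 q}{d x^2}.
\end{aligned}
\]
```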

Keywords: Bernoulli-Euler beams, nanobeams, nonlocal elasticity, closed-form solutions

Procedia PDF Downloads 356
15697 A Development of Creative Instruction Model through Digital Media

Authors: Kathaleeya Chanda, Panupong Chanplin, Suppara Charoenpoom

Abstract:

The purposes of developing a creative instruction model through digital media are to: 1) enable learners to learn through an instruction media application; 2) help learners implement instruction media correctly and appropriately; and 3) facilitate learners in applying technology for searching information and practicing skills so that they can use technology creatively. The sample group consists of 130 secondary students studying at Bo Kluea School, Bo Kluea Nuea Sub-district, Bo Kluea District, Nan Province, selected by simple random sampling. The statistics used in this research are percentage, mean and standard deviation, within a one-group pretest-posttest design. The findings are summarized as follows. The congruence index of the instruction media for occupation and technology subjects is appropriate. Comparing learning achievements before and after implementing the instruction media shows that the posttest achievements are higher than the pretest achievements, with statistical significance at the .05 level. For the learning achievements from the instruction media implementation, the pretest mean is 16.24 while the posttest mean is 26.28; when the pretest and posttest results are compared and the difference in means is tested, the posttest achievements are higher than the pretest achievements with statistical significance at the .05 level. This can be interpreted as the learners achieving better learning progress.
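
The pretest-posttest comparison reported above amounts to a paired t-test at the .05 level; the sketch below shows the computation with simulated scores (means chosen to mirror 16.24 and 26.28), since the actual student data are not available here.

```python
# Paired t-test for a one-group pretest-posttest design; simulated stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pretest = rng.normal(loc=16.24, scale=4.0, size=130)
posttest = pretest + rng.normal(loc=10.04, scale=3.0, size=130)  # mean gain ~ 26.28 - 16.24

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at .05: {p_value < 0.05}")
```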

Keywords: teaching learning model, digital media, creative instruction model, Bo Kluea school

Procedia PDF Downloads 129
15696 Understanding Language Teachers’ Motivations towards Research Engagement: A Qualitative Case Study of Vietnamese Tertiary English Teachers

Authors: My T. Truong

Abstract:

Among the various professional development (PD) options available to English as a second language (ESL) teachers, especially those at the tertiary level, research engagement has recently been recommended as an innovative model with a transformative force for both individual teachers' PD and wider school improvement. Teachers who conduct research themselves tend to develop critical and analytical thinking about their instructional practices and enhance their ability to make autonomous pedagogical judgments and decisions. With such capabilities, teacher researchers are thus more likely to contribute to curriculum innovation in their schools and to improvement of the whole educational process. The extent to which ESL teachers engage in research, however, depends largely on their research motivation, which not only shapes teachers' choice of which PD activity to pursue but also affects the degree and duration of effort they are willing to invest in pursuing it. To understand language teachers' research practices, and to inform educational authorities about ways to promote a research culture among their ESL teaching staff, it is therefore vital to investigate teachers' research motivation. Despite its importance, this individual-difference construct has not received due attention, especially in ESL contexts. To fill this gap, this study aims to explore Vietnamese tertiary ESL teachers' motivations towards research. Guided by self-determination theory and the process model of motivation, it investigates teachers' initial motivations for conducting research and the factors that sustained or degraded their motivation during the research engagement process. Adopting a qualitative case-study approach, the study collected longitudinal data via semi-structured interviews and guided diary entries from three ESL tertiary teachers who were conducting their own research projects. The respondents attended two semi-structured interviews (one at the beginning of their project and the other three months afterwards) and wrote six guided diary entries between the two interviews. The results confirm the significant role motivation plays in driving teachers to initiate and maintain their participation in research, and challenge some common assumptions in the teacher motivation literature. For instance, the quality of past and current research experience unsurprisingly emerged as an important factor that both motivated and demotivated teachers in their research engagement. Unlike general suggestions in the motivation literature, however, external demand was found in this study to be a critical motivation-sustaining factor, while intrinsic research interest did not suffice to help a teacher fulfil his research endeavor. With such findings, the study is expected to widen the motivational perspective in understanding language teacher research practice, given the paucity of related studies. Practically, it is hoped that the findings will help teacher educators, PD program designers and educational policy makers in Vietnam and similar contexts to approach the question of whether and how to promote research activities among ESL teachers in a feasible way. For practicing and in-service teachers, the findings may elucidate the motivational conditions under which they can become engaged in research, and the motivational factors that might hinder or encourage them in doing so.

Keywords: teacher motivation, teacher professional development, teacher research engagement, English as a second language (ESL)

Procedia PDF Downloads 167
15695 Study and Simulation of a Dynamic System Using Digital Twin

Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli

Abstract:

Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point using the least squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famous Technologies. Finally, in order to validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified using the method proposed by Hammerstein. Different steps are applied to the plant and, from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As with the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, yet importantly, from the practical results obtained, one can conclude that the usage of a Digital Twin to study dynamic systems is extremely useful in the industrial environment, considering that it is possible to develop and tune controllers using the virtual model of the real system.
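
A minimal sketch of the linear identification step, assuming a first-order ARX structure fitted by ordinary least squares around the operating point, is shown below; the plant data are simulated and the "true" coefficients are arbitrary.

```python
# Least-squares identification of a first-order ARX model of a level plant;
# the input/output data are simulated stand-ins for the didactic plant.
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0, 1, size=300)                     # pump input (deviation variable)
y = np.zeros(300)                                   # level (deviation variable)
for k in range(1, 300):                             # "true" plant used to generate data
    y[k] = 0.9 * y[k - 1] + 0.2 * u[k - 1] + 0.005 * rng.normal()

# regression y[k] = a*y[k-1] + b*u[k-1]  ->  solve min ||Phi theta - Y||
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("identified a, b:", theta)                    # should be close to 0.9 and 0.2
```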

Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models

Procedia PDF Downloads 127
15694 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems that determines the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed their contribution to model performance to be insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
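
The traditional baseline described above can be sketched in a few lines of scikit-learn; the pipeline below (TF-IDF word n-grams plus a linear SVM, with no stemming or morphological parsing) uses a handful of toy Turkish comments because the 76,000-comment corpus is proprietary.

```python
# Bag-of-n-grams + linear SVM baseline for opinion mining; toy data only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["ürün harika, çok memnun kaldım", "kargo çok geç geldi, berbat",
         "fiyatı normal", "mükemmel hizmet", "bir daha asla almam", "idare eder"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams, no stemming
    LinearSVC())
model.fit(texts, labels)
print(model.predict(["kargo hızlıydı ama ürün bozuk geldi"]))
```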

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 124
15693 Novel Pyrimidine Based Semicarbazones: Confirmation of Four Binding Site Pharmacophoric Model Hypothesis for Antiepileptic Activity

Authors: Harish Rajak, Swati Singh

Abstract:

A series of novel pyrimidine-based semicarbazones were designed and synthesized on the basis of a semicarbazone-based pharmacophoric model, to satisfy the structural prerequisites crucial for antiepileptic activity. The semicarbazone-based pharmacophoric model consists of the following four essential binding sites: (i) an aryl hydrophobic binding site with a halo substituent; (ii) a hydrogen bonding domain; (iii) an electron donor group; and (iv) another hydrophobic-hydrophilic site controlling the pharmacokinetic features of the anticonvulsant. Aryl semicarbazones have been recognized as a structurally novel class of compounds with remarkable anticonvulsant activity. In the present study, all the test semicarbazones were subjected to molecular docking using Glide v5.8. Some of the compounds were found to interact with the ARG192, GLU270 and THR353 residues of the 1OHV protein, present in the GABA-AT receptor. The chemical structures of the synthesized molecules were characterized by elemental and spectral (IR, 1H NMR, 13C NMR and MS) analysis. The anticonvulsant activities of the compounds were investigated using the maximal electroshock seizure (MES) and subcutaneous pentylenetetrazole (scPTZ) models. Neurotoxicity was evaluated in mice by the rotarod test. Attempts were also made to establish structure-activity relationships among the synthesized compounds. The results of the present study confirmed that the pharmacophore model with four binding sites is essential for antiepileptic activity.

Keywords: pyrimidine, semicarbazones, anticonvulsant activity, neurotoxicity

Procedia PDF Downloads 241
15692 The Effectiveness of Computerized Dynamic Listening Assessment Informed by Attribute-Based Mediation Model

Authors: Yaru Meng

Abstract:

The study contributes to the small but growing literature on computerized approaches to dynamic assessment (C-DA), wherein individual items are accompanied by mediating prompts. Mediation in the current computerized dynamic listening assessment (CDLA) was informed by an attribute-based mediation model (AMM) that identified the underlying L2 listening cognitive abilities and associated descriptors. The AMM served to focus mediation during C-DA on particular cognitive abilities, with the goal of specifying areas of learner difficulty. 86 low-intermediate L2 English learners from a university in China completed three listening assessments, with an experimental group receiving the CDLA system and a control group a non-dynamic assessment. As an assessment, the use of the AMM in C-DA generated detailed diagnoses for each learner. In addition, both within- and between-group repeated-measures ANOVAs found greater gains at the level of specific attributes among C-DA learners over the course of the 5-week study. Directions for future research are discussed.

Keywords: computerized dynamic assessment, effectiveness, English as foreign language listening, attribute-based mediation model

Procedia PDF Downloads 198
15691 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province

Authors: Yujie Zhao, Jiantao Weng

Abstract:

In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have improved, resulting in the rapid emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation routes of these buildings, ignoring the impact of the environment on the subjective feelings of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings already in operation, which affects the environmental comfort of the building lobby and internal public spaces. At present, these adverse effects are usually reduced by adding active equipment, such as air curtains to block air exchange or additional heating and air conditioning. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. Therefore, it is of considerable significance to explore entrance forms that improve the environmental comfort of commercial buildings and save energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object for modelling. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter, and its potential economic loss, is estimated as the objective metric. The study finally obtains the optimization direction for the entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms. The conclusions will guide the entrance design of the same type of commercial complex in this area.
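
Before running a CFD study, the order of magnitude of the infiltration heating load can be checked with Q = rho * V_dot * c_p * dT; the sketch below does this with an assumed flow rate, temperatures and operating hours rather than the simulated values for the Hangzhou complex.

```python
# Back-of-envelope infiltration heating load; all inputs are assumed values.
rho, c_p = 1.2, 1005.0             # air density (kg/m3) and specific heat (J/kg.K)
V_dot = 2.5                        # infiltration air flow through the entrance, m3/s
dT = 20.0 - 5.0                    # indoor minus outdoor temperature, K

Q = rho * V_dot * c_p * dT         # instantaneous heat loss, W
hours = 10 * 90                    # assumed opening hours over a 90-day heating season
energy_kwh = Q * hours / 1000.0
print(f"heat loss {Q/1000:.1f} kW, season energy ~ {energy_kwh:.0f} kWh")
```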

Keywords: air infiltration, commercial complex, heat consumption, CFD simulation

Procedia PDF Downloads 118
15690 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies for improving the performance of data classification tasks and, at the same time, reducing the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
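
The two wrapper objectives can be sketched as follows for one candidate feature subset: the EM-clustering log-likelihood and the cross-validated accuracy of a decision tree trained on the cluster labels (scikit-learn's CART standing in for C4.5); the data and subset are random placeholders, and the evolutionary search itself (ENORA/NSGA-II) is omitted.

```python
# Evaluation of the two wrapper objectives for one candidate feature mask.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
X = rng.normal(size=(200, 20))                      # 200 children x 20 attributes (random stand-in)
mask = np.zeros(20, dtype=bool)
mask[rng.choice(20, size=8, replace=False)] = True  # one candidate feature subset

def objectives(X, mask, n_clusters=3):
    Xs = X[:, mask]
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xs)
    likelihood = gmm.score(Xs)                      # objective 1: mean log-likelihood
    labels = gmm.predict(Xs)
    tree = DecisionTreeClassifier(random_state=0)
    accuracy = cross_val_score(tree, Xs, labels, cv=3).mean()   # objective 2: classifier accuracy
    return likelihood, accuracy

print(objectives(X, mask))
```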

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 353
15689 Effectiveness Factor for Non-Catalytic Gas-Solid Pyrolysis Reaction for Biomass Pellet Under Power Law Kinetics

Authors: Haseen Siddiqui, Sanjay M. Mahajani

Abstract:

Various important reactions in the chemical and metallurgical industries fall in the category of gas-solid reactions. These reactions can be categorized as catalytic and non-catalytic gas-solid reactions. In gas-solid reaction systems, heat and mass transfer limitations have an appreciable influence on the rate of the reaction, and overlooking such effects while collecting reaction rate data can have serious consequences for the design of the reactor. Pyrolysis, which involves the production of gases through the interaction of heat and a solid substance, falls in this category. Pyrolysis is also an important step in the gasification process; gasification reactivity is therefore largely determined by the pyrolysis process, which produces the char that feeds the gasification step. In the present study, a non-isothermal transient 1-D model is developed for a single biomass pellet to investigate the effect of heat and mass transfer limitations on the rate of the pyrolysis reaction. The resulting set of partial differential equations is first discretized using the method of lines to obtain a set of ordinary differential equations in time. These equations are then solved using the MATLAB ODE solver ode15s. The model is capable of incorporating structural changes, porosity variation, variation in thermal properties and various pellet shapes. The model is used to analyze the effectiveness factor for different values of the Lewis number and of the heat of reaction (G factor). The Lewis number includes the effect of the thermal conductivity of the solid pellet: the higher the Lewis number, the higher the thermal conductivity of the solid. The effectiveness factor was found to decrease with decreasing Lewis number, because smaller Lewis numbers retard heat transfer inside the pellet and thereby lower the rate of the pyrolysis reaction. The G factor includes the effect of the heat of reaction. Since the pyrolysis reaction is endothermic, the G factor takes negative values; the more negative the value, the more endothermic the pyrolysis reaction. The effectiveness factor was found to decrease with more negative values of the G factor. This behavior can be attributed to the fact that a more negative G factor results in more energy consumption by the reaction, owing to a larger temperature gradient inside the pellet. Further, analytical expressions are also derived for the gas and solid concentrations and the effectiveness factor for two limiting cases of the general model: the homogeneous model and the unreacted shrinking core model.
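
A minimal method-of-lines sketch for a simpler stand-in problem (isothermal first-order reaction with diffusion in a slab pellet) is shown below: space is discretised by finite differences and the resulting ODE system is handed to a stiff solver, as done with ode15s in the study; the diffusivity, rate constant and boundary values are assumed.

```python
# Method-of-lines sketch: 1-D transient diffusion with first-order reaction in a
# slab pellet, solved with a stiff integrator (stand-in for the full pyrolysis model).
import numpy as np
from scipy.integrate import solve_ivp

N, L = 50, 1.0                       # grid points, dimensionless half-thickness
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
D, k = 1.0, 5.0                      # assumed diffusivity and first-order rate constant

def rhs(t, c):
    dcdt = np.empty_like(c)
    dcdt[0] = D * 2 * (c[1] - c[0]) / dx**2 - k * c[0]          # symmetry at the centre
    dcdt[1:-1] = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2 - k * c[1:-1]
    dcdt[-1] = 0.0                                              # fixed surface concentration
    return dcdt

c0 = np.zeros(N); c0[-1] = 1.0                                  # surface held at unit value
sol = solve_ivp(rhs, (0.0, 2.0), c0, method="BDF")              # stiff solver, like ode15s
eta = sol.y[:, -1].mean()            # near-steady average/surface ratio ~ effectiveness factor
print("centre value:", round(sol.y[0, -1], 3), "effectiveness-style ratio:", round(eta, 3))
```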

Keywords: effectiveness factor, G-factor, homogeneous model, lewis number, non-catalytic, shrinking core model

Procedia PDF Downloads 118